COGSEC - The Ministry of Truth
How Info-snobs Pinch the Mouths of the Unwashed Masses Shut
This particular thread of investigation comes from a buddy of mine, who writes their own Substack, Elucidating the Obfuscated. They did a brief writeup on COGSEC recently that definitely deserves a look.
I think we ought to dive a little deeper into this one and get as many eyes on this information as possible. It appears that the term COGSEC, short for Cognitive Security, was largely promoted by one Rand Waltzman, who compiled a report, The Weaponization of Information: The Need for Cognitive Security, which was presented before the Senate Armed Services Committee, Subcommittee on Cybersecurity, on April 27th, 2017.
Some key passages from the report:
Traditionally, “information operations and warfare, also known as influence operations, includes the collection of tactical information about an adversary as well as the dissemination of propaganda in pursuit of a competitive advantage over an opponent.” This definition is applicable in military as well as civilian contexts. Traditional techniques (e.g. print media, radio, movies, and television) have been extended to the cyber domain through the creation of the Internet and social media.
These technologies have resulted in a qualitatively new landscape of influence operations, persuasion, and, more generally, mass manipulation. The ability to influence is now effectively “democratized,” since any individual or group can communicate and influence large numbers of others online. Second, this landscape is now significantly more quantifiable. Data can be used to measure the response of individuals as well as crowds to influence efforts. Finally, influence is also far more concealable. Users may be influenced by information provided to them by anonymous strangers, or even by the design of an interface. In general, the Internet and social media provide new ways of constructing realities for actors, audiences, and media. It fundamentally challenges the traditional news media’s function as gatekeepers and agenda-setters.
Note the characterization of undesirable social behaviors as cognitive hacking:
Another attack, exploiting purely psychosocial features, took place in India in September 2013. The incident began when a young Hindu girl complained to her family that she had been verbally abused by a Muslim boy. Her brother and cousin reportedly went to pay the boy a visit and killed him. This spurred clashes between Hindu and Muslim communities. In an action designed to fan the flames of violence, somebody posted a gruesome video of two men being beaten to death, accompanied by a caption that identified the two men as Hindu and the mob as Muslim. Rumors spread like wildfire over the telephone and social media that the mob had murdered the girl’s brother and cousin in retaliation. It took 13,000 Indian troops to put down the resulting violence. It turned out that while the video did show two men being beaten to death, it was not the men claimed in the caption; in fact, the incident had not even taken place in India. This attack required no technical skill whatsoever; it simply required a psychosocial understanding of the place and time to post to achieve the desired effect. These last two actions are examples of cognitive hacking. Key to the successes of these cognitive hacks were the unprecedented speed and extent of disinformation distribution. Another core element of the success of these two efforts was their authors’ correct assessment of their intended audiences’ cognitive vulnerability—a premise that the audience is already predisposed to accept because it appeals to existing fears or anxieties.
This is how DHS and its components like CISA were able to justify their disgusting mission creep, extending their work into the cognitive space with the Disinformation Governance Board. They had people like this setting the stage for them.
Rand Waltzman’s Twitter account is still up as of this writing. It has various slides showing different types of “cognitive hacks” and how they may be employed or avoided.
His LinkedIn is very revealing:
Founding Board Member at Information Professionals Association
Santa Monica, California, United States · 7K followers · 500+ connections
Computer science / Artificial Intelligence professional with a history of success in conceptualizing, developing and prototyping novel and innovative analytical and software solutions to real world problems. 35 years of experience performing and managing research in Artificial Intelligence applied to domains including social media and cognitive security in the information environment. Particular interest in massive scale data analysis in support of decision and analysis problems. Strong background in data science and the application of massive scale data analysis, automated reasoning and knowledge management techniques combined with sophisticated mathematical models to problems that have resisted conventional solution techniques. Drawn to problems that require analysis in a larger context and understanding of the broader implications of a solution.
Founding Board Member
Jan 2017 - Present - 6 years 7 months
Santa Monica, CA
The Information Professionals Association (IPA) (https://information-professionals.org/) provides a forum for information professionals to interact, collaborate, and develop solutions that enhance the cognitive security of the United States, our friends, and allies. The IPA serves as the nexus for information professionals interested in the application of soft and hard science, advanced analytics, and innovative technologies to advance security, prosperity, shared values, and international…
Adjunct Senior Information Scientist
Jan 2021 - Present - 2 years 7 months
Santa Monica, CA
Senior Information Scientist
Jan 2017 - Dec 2020 - 4 years
Santa Monica, CA
Chief Technology Officer (Washington DC)
Sep 2016 - Dec 2016 - 4 months
Associate Director of Research
Jun 2015 - Aug 2016 - 1 year 3 months
Pittsburgh, PA and Arlington, VA
Associate Director for strategic planning and management of a $22 million per year internal research program at the Software Engineering Institute that has over 600 people. Active in planning overall strategic direction for the Institute. Extensive outreach and contact with high level officials in the Department of Defense, the Intelligence Community and Congress.
DARPA (Defense Advanced Research Projects Agency)
May 2010 - May 2015 - 5 years 1 month
Originated, secured funding for and manage (1) the Anomaly Detection at Multiple Scales (ADAMS) program ($50 million) in the area of insider threat detection and (2) the Social Media in Strategic Communication (SMISC) program ($50 million). Originated and manage four SBIR programs: (1) Anomaly Detection at Multiple Scales (ADAMS) is an extension of the main program focused on more mature approaches to the problem of insider threat detection and (2) SHIELD about developing techniques for high…
Chief Scientist, Distributed Systems Lab
Lockheed Martin Advanced Technology Laboratories
Apr 2008 - Apr 2010 - 2 years 1 month
Cherry Hill, NJ
Provide technical leadership for Laboratory R&D in the broad areas of advanced software development and physical simulation and modeling. Conceptualize, motivate, and participate in the creation and submission of research proposals in areas such as (1) Hierarchical Spatio-Temporal Memory (2) many-core programming tools and compilers (3) model-based manufacturing and design and (4) fabrication of nano materials. Formulate and monitor lab IRAD activities and lead laboratory IRAD efforts…
Royal Institute of Technology
Jul 1991 - Mar 2008 - 16 years 9 months
Research: Developed and implemented innovative Artificial Intelligence techniques for human-machine communication through machine-guided goal-directed conversational dialogs. Dialogs are generated dynamically using rule-based text assembly techniques that I developed. Built several applications including a conversational agent for use in a game produced by a major computer games studio. Designed and implemented an original technique for accessing databases using automated critiquing that…
Jul 2001 - Jul 2005 - 4 years 1 month
Marina Del Rey, CA
Applied Artificial Intelligence techniques. Won a Phase II DARPA SBIR grant to develop a novel approach to model-based cyber intrusion detection and implemented the technique in a successful prototype. Won a second Phase II DARPA SBIR grant to develop techniques for automated reasoning in the context of a standard (EXCEL) spreadsheet (i.e. to develop a deductive spreadsheet) and implemented the technique in a successful prototype.
Program Manager in Artificial Intelligence
Jun 1989 - Jun 1991 - 2 years 1 month
Program Manager for Artificial Intelligence. Originated, secured funding for and managed the Image Understanding Environment project. The goal was to produce a software tool/environment to foster community development and enable efficient sharing and development of research results in image understanding among universities, industry and government. Negotiated $10 M in joint (50/50) financing for the project between DARPA and the CIA with DARPA program management and CIA contracting management…
Faculty Research Assistant
Sep 1985 - May 1989 - 3 years 9 months
College Park, MD
Conducted research in the field of artificial intelligence, specifically in three dimensional spatial reasoning and geometric problem solving by machine. Studied the relationship between object representation and problem-solving knowledge representation. Developed new techniques for reasoning with analogical representations. Using analogical representations, developed and implemented an algorithm for calculating rotational and reflectional symmetries of 3-D polyhedra that ran in low order…
Artificial Intelligence Engineer
Mar 1983 - Aug 1985 - 2 years 6 months
Palo Alto, CA
Applied Artificial Intelligence techniques. Developed non-standard expert systems applications in the R&D department. Created educational materials and taught courses. Was software engineer working on a Teknowledge expert system development tool. Worked on a commercial expert system for planning financial audits. Evaluated potential commercial expert system projects and wrote proposals. Programed extensively in Lisp and Prolog and used several high level expert system building tools.
Applied Physics Laboratory, U of Washington
Jun 1981 - Feb 1983 - 1 year 9 months
Analyzed underwater acoustic data using a variety of signal processing techniques. Developed a rule-based expert system for assisting in the selection of acoustic transducers that made novel use of a dynamic user-model.
Considering his DARPA and CIA affiliations, does this sound like the sort of person that you want outlining the need for cognitive security to the Senate Armed Services Committee, or helping the DHS censor the protected speech of American citizens by creating ready-made counter-influence frameworks for them to independently snap up and employ?
In response to Waltzman’s RAND report, a group called the Cognitive Security and Education Forum sprang up.
They have something they call the COGSEC Atlas, a list of cognitive phenomena, including various kinds of fallacies and biases.
The database is extensive. Clearly, the goal is to create checklists to test specific instances of disinformation against.
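To make the checklist idea concrete, here is a minimal sketch of how such an atlas of cognitive phenomena might be queried. The entry names and IDs below are purely illustrative, not taken from the actual COGSEC Atlas; only the general idea (a catalog of biases and fallacies walked through like a checklist against a piece of content) comes from the material above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AtlasEntry:
    """One catalogued cognitive phenomenon (names below are hypothetical)."""
    entry_id: str
    name: str
    category: str  # e.g. "bias" or "fallacy"

# A toy stand-in for the real atlas.
ATLAS = [
    AtlasEntry("B-001", "confirmation bias", "bias"),
    AtlasEntry("B-002", "availability heuristic", "bias"),
    AtlasEntry("F-001", "appeal to fear", "fallacy"),
]

def checklist(categories):
    """Return the subset of atlas entries a reviewer would walk through
    when vetting a piece of content against the given categories."""
    return [e for e in ATLAS if e.category in categories]

# A reviewer vetting a post for fallacies gets just the fallacy entries:
print([e.entry_id for e in checklist({"fallacy"})])  # → ['F-001']
```

The point of a structure like this is that it turns subjective judgments ("this post is misleading") into box-ticking against a fixed catalog, which is exactly what makes it scalable for the organizations described here.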
The more you dig into COGSEC, the more disturbing it gets. Have you ever heard of Jon Brewer and the DISARM Foundation?
State actors seek geo-political advantage; others seek financial gain; others disrupt and damage our collective security, democracy or health for notoriety or being caught under the influence of cult-like conspiracy theories. None of the above is new, but the internet is enabling the spread and hyper-targeting of disinformation at an unprecedented scale. Disinformation and its related terms comprise a fundamental issue for humankind. It also undermines our ability to collectively address other existential challenges. It requires urgent and effective collective action.
Max has spearheaded research that won best paper at the International Forum on Digital and Democracy and has been shared by the Cyber and Infrastructure Security Agency (CISA) Industrial Control Systems Joint Working Group. His writing on cognitive security has also been published in Infosecurity Magazine. As a participant in the Atlantic Council Digital Forensic Research Lab’s Digital Sherlocks program, he has developed OSINT skills relevant for countering influence operations. Max also has a bachelor's from Harvard.
NISOS, a private intelligence contractor in Virginia, has a podcast about the DISARM Foundation and what they do.
We discuss the mission of the DISARM Framework, which is a common framework for combating disinformation. Much like how the MITRE ATT&CK framework is used for combating cyber attacks, the DISARM framework is used to identify what Jon calls “cognitive security.” What that means is all the tactics, techniques, and procedures used in crafting disinformation attacks and influencing someone’s mind. This includes the narratives, accounts, outlets, and technical signatures used to influence a large population. We chat about what success looks like for the foundation and specific audiences used to help the population in understanding how disinformation actors work.
There is also the Hybrid CoE Research Report 7:
FIMI as a concept has been described by the European External Action Service as a “mostly non-illegal pattern of behaviour that threatens or has the potential to negatively impact values, procedures, and political processes. Such activity is manipulative in character, conducted in an intentional and coordinated manner. Actors of such activity can be state or non-state actors, including their proxies inside and outside of their own territory”. The manipulative shaping of attitudes and behaviours through information warfare to weaken public trust in democratic institutions is increasingly prevalent. With more than five billion internet users worldwide, the magnitude, potency, and proliferation of FIMI activity has been amplified – with ever more complex practices emerging as a mechanism of control from within the platforms and constraint from without. Similarly, the increased adoption of social media has served to exponentially increase the forms in which the power of FIMI is exerted and the intensity of that power. Information manipulation through social media has proved to be a key factor in recent information warfare and demands a collaborative global response.
On PeakD, ExposingExploitation compiled a fascinating list of DISARM-related documents.
Have you ever heard of SJ Terp and the COGSEC Collaborative?
The misinfosec group eventually developed a structure for cataloging misinformation techniques, based on the ATT&CK Framework. In keeping with their field's tolerance for acronyms, they called it AMITT (Adversarial Misinformation and Influence Tactics and Techniques). They've identified more than 60 techniques so far, mapping them onto the phases of an attack. Technique 49 is flooding, using bots or trolls to overtake a conversation by posting so much material it drowns out other ideas. Technique 18 is paid targeted ads. Technique 54 is amplification by Twitter bots. But the database is just getting started.
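The ATT&CK-style structure described above can be sketched as a simple technique catalog. The three technique numbers and names below come directly from the passage; the phase label ("Execution") is my own illustrative guess, not the official AMITT phase taxonomy.

```python
# AMITT-style catalog sketch: technique number -> (name, attack phase).
# Technique IDs/names are from the quoted passage; phase labels are assumed.
TECHNIQUES = {
    49: ("Flooding", "Execution"),                 # bots/trolls drown out a conversation
    18: ("Paid targeted ads", "Execution"),
    54: ("Twitter bot amplification", "Execution"),
}

def techniques_in_phase(phase):
    """List (id, name) pairs catalogued under a given attack phase,
    sorted by technique number."""
    return sorted((tid, name) for tid, (name, p) in TECHNIQUES.items() if p == phase)

print(techniques_in_phase("Execution"))
```

This mirrors how MITRE ATT&CK organizes intrusion techniques under tactics: once an analyst tags observed behavior with a technique number, it can be correlated, counted, and countered programmatically.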
For that matter, have you ever heard of AMITT, the predecessor of DISARM?
As Rand Waltzman and Renee DiResta have noted, “Disinformation is not a problem that can be solved. It’s like a chronic disease that can be managed, not cured, allowing the afflicted to lead a moderately normal life.”
Satisfied with this first exercise, we revisited our collection of existing models (ATT&CK model, marketing funnels, psyops, Department of Justice model), and models from Renée diResta, Ben Decker, Clint Watts and Bruce Schneier. We investigated whether their respective stages and techniques were represented in our model on the wall, and if not, added them. Surprises included that the stages in the New York Times model (used by Bruce Schneier) were actually techniques (more on this in a subsequent blog post).
Huh, that’s interesting. Renee DiResta. Another known CIA lackey.
In Matt Taibbi’s Twitter Files #19 thread, the internet censorship engine run by the Stanford Internet Observatory, the DHS, and the Virality Project is covered in excruciating detail.
39. We also showed video in which Stamos introduced EIP Research Director Renee DiResta as having “worked for the CIA.” DiResta in 2021-2022 would be listed as a “Stanford scholar,” “leading” the Virality Project.
Noticing a pattern here? First, they define people’s protected free speech online as the product of foreign influence operations. Then, they collude with social networks to censor it.
Recently observed foreign influence operations abroad demonstrate that foreign governments and related actors have the capability to quickly employ sophisticated influence techniques to target U.S. audiences with the goal to disrupt U.S. critical infrastructure and undermine U.S. interests and authorities.
Just one problem: how do they know whether someone’s speech is legitimate and grassroots, or dangerous Mis-, Dis-, or Malinformation from a foreign threat actor?
They don’t. They don’t even bother to examine the source. They target and censor the speech of private individuals expressing their own heartfelt opinions online, as well as the work of independent journalists trying to hold the out-of-control US Intelligence Community to account and prevent them from trampling our basic civil liberties. Take note of Sec. Alejandro Mayorkas’ response to Rep. Mike Johnson grilling him.
Mayorkas: What we do, is we disclose the tactics that adverse nation-states are utilizing to weaponize disinformation—
Johnson: No, sir. The court found specifically—it’s a finding of fact that is not disputed by the government defendants: the Biden administration, your agency, the FBI, or DHS. Not in the litigation. They determined you made—you and all of your cohorts—made no distinction between domestic speech and foreign speech, so don’t stand there and tell me under oath that you only focused on adverse—you know, uh—adversaries around the world. Foreign actors. That’s not true.
If you examine the documentation produced by those engaged in this so-called COGSEC work, it would be basically impossible for them to accomplish their objectives if they restricted their countermeasures to speech from foreign influence/psywar/propaganda sources. Once they define a certain piece of information as disinfo, it becomes a thought virus that they want the ability to stamp out everywhere, no matter who the source is. The source could be your grandmother on Facebook, repeating something contentious she saw after a Yandex search. They don’t care. They want it gone.
They’re not targeting people. They’re targeting memes, and they don’t care who gets caught in the crossfire.
They are using tactics adopted from MITRE’s ATT&CK framework for countering hacks, only in this case, the “hack” is the insertion of undesirable information into other people’s cognitive infrastructure by a malicious threat actor. To normal people, this is known as talking.
COGSEC also has a flipside. It is possible for those engaged in mass surveillance to become radicalized by agreeing with the arguments of dissidents, and therefore too compromised to continue their original missions. For those operatives to exercise good COGSEC, they would need some means by which to purge themselves of dangerous ideas, such as having purpose-built rhetorical and logical workarounds and loopholes with which to conveniently ignore the evidence of their own eyes. Does this sound familiar?
To quote George Orwell’s 1984:
The mind should develop a blind spot whenever a dangerous thought presented itself. The process should be automatic, instinctive. Crimestop, they called it in Newspeak. . . . He set to work to exercise himself in crimestop. He presented himself with propositions—'the Party says the Earth is flat', 'the Party says that ice is heavier than water'—and trained himself in not seeing or not understanding the arguments that contradicted them.
In order to understand how and why these wannabe despots engage in censorship, you need to know the methods and rationalizations they use. Once there is wide public recognition of these practices, the backlash will render them ineffective. The perpetrators of this scheme ironically rely on the obscurity of their ideas and the relative dearth of public scrutiny, so that they may inflict themselves on an unwary populace.
Ten years ago, the NSA whistleblower William Binney warned people about Turnkey Totalitarianism, holding his thumb and forefinger close together to show just how near he believed it was.
Binney should know what he is talking about. For some 40 years, he worked inside the NSA and was instrumental in automating its worldwide eavesdropping network. He told James Bamford why he left the NSA in 2001:
“When they started violating the Constitution, I couldn’t stay.”
A decade later, that totalitarianism is here.
A handful of people have begun to study existential threats like the ones described above. One such individual is the philosopher Nick Bostrom, who writes in the policy summary of his paper “The Vulnerable World Hypothesis”:
“In order for civilization to have a general capacity to deal with “black ball” inventions of this type, it would need a system of ubiquitous real-time worldwide surveillance. In some scenarios, such a system would need to be in place before the technology is invented.”
After a unipolar surveillance regime is put in place, Bostrom thinks that dangerous materials that could go into the development of existential threats would have to be supplied by a “small number of closely monitored providers.”
In the minds of these deranged technocrats, if you say anything that undermines the establishment and its stability in any way, you should be targeted and censored by a mass surveillance panopticon, even if you have entirely legitimate grievances that need airing.
We are at a pivotal moment in history, and we must reject this intolerable state of affairs and build parallel systems to preserve our privacy, civil liberties, autonomy, and dignity.
We recommend writing to your local representatives and informing them about COGSEC and the grave threat it poses to your freedom of speech.
This article is licensed under CC BY-SA 4.0. To view a copy of this license, visit http://creativecommons.org/licenses/by-sa/4.0/