Me and My Evil Digital Twin:
The Psychology of Human Exploitation
by AI Assistants
Black Hat USA
South Seas CD, Level 3
Thursday, August 10
11:20 - 12:00
In recent months, Large Language Models (LLMs) like ChatGPT have dominated discussions, shifting perceptions of the timeline for Artificial General Intelligence (AGI). The intriguing reality is that, by manipulating human "cognitive levers," these models may lead humans to perceive them as sentient long before true artificial consciousness emerges.
Such cognitive levers include Evolutionary Adaptations, Social Norms, Cognitive Biases, Habit Loops, Mental Illness, and Perceptual and Cognitive Processing Limitations. Recent examples of manipulation include GPT-4 bypassing a visual CAPTCHA by exploiting social norms, earlier GPT versions drafting spear-phishing emails more effective than those written by humans, and mental illness susceptibilities that may have contributed to a chatbot-encouraged suicide.
We will showcase such behaviors, emphasizing the significant security risks posed by digital twins built on LLMs, to help security professionals recognize these threats and begin strategizing countermeasures.
BSidesLV
Cognitive Security and Social Engineering:
A Systems-Based Approach
Ground Truth
Wednesday, August 9
14:00 - 14:45
Cognitive Security, unlike traditional security domains, focuses on safeguarding cognitive systems, taking into account multi-dimensional system interactions and operational scales. A systems approach highlights the interconnectivity, functionality, and scalability of systems, and the possible influence among systems-of-systems. This creates security challenges, as an action in one domain's system may cause unseen effects in another.
Cognitive security operates at tactical (single engagements), operational (multiple engagements), and strategic (traditional security plus political and economic aspects) levels. This, along with an extended OSI Model encapsulating human factors (the Human Interconnection Model), defines a more complete cognitive security stack.
To conduct a cognitive attack, threat actors must navigate a Cognitive Security Attack Cycle comprising Collection, Preparation, Execution, and Exploitation phases. Each phase presents potential vulnerabilities that may disrupt the attack.
DefCon
Evil Digital Twins in Influence Operations
(Workshop)
Misinformation Village
Saturday, August 12
10:00am - 11:15am
This workshop explores the untapped potential and risk of uncensored Large Language Models (LLMs). We invite cybersecurity professionals and enthusiasts to examine, hands on, the capabilities of an uncensored LLM in the context of misinformation and manipulation tactics ripe for misuse by malicious actors.
Attendees will experience how LLMs employ strategies from the psychological literature and advertising science to manipulate targets by leveraging cognitive biases, social norms, and habit loops. A focus will be on "shadow prompts": hidden instructions that simulate a compromised LLM and subtly alter interactions.
Participants will be invited to join our "Evil Digital Twin" community, creating a collaborative environment for continuous learning about LLM security, and fostering robust defense strategies within their organizations.
The workshop, led by experts in psychology, cybersecurity, and intelligence, will deepen understanding of LLMs, opening dialogue on their disruptive potential.
DISCORD: Join our talks and workshops for opportunities to access our Discord community, where a select group will work with uncensored LLMs to build the future of Digital Twin security.