Scott Blain, PhD

From understanding human minds to building safe AI systems

Over a decade of research on human cognition, spanning pattern detection, social reasoning, and personality mechanisms, now translated into frameworks for keeping AI capabilities aligned with human values.

Cognitive Science B.S. → Psychology Ph.D. → Psychiatry Postdoc → AI Safety Researcher
Peer-Reviewed Publications · 71% AI Hallucination Reduction · Citations · 20,000+ Research Participants

Why This Matters

AI systems increasingly mirror human cognitive abilities. Understanding how those abilities fail in humans provides crucial insight for preventing similar failures in AI. Cognitive science offers a blueprint for meaningful alignment.

Research Pillars

Pattern Recognition & AI Hallucinations

From studying apophenia in human participants to developing a metacognitive framework that achieved a 71% reduction in LLM hallucinations. Bridging human false-pattern detection and AI confabulation.

Explore Research →
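
As a rough sketch of the general idea only, not the published framework: a minimal metacognitive gate has a model rate its own draft answer and abstain when self-reported confidence is low. The ask_model stub and the 0.7 threshold below are hypothetical placeholders.

    # Illustrative metacognitive gate for LLM answers (sketch, not the published method).
    def ask_model(prompt: str) -> str:
        # Hypothetical stand-in for any chat-completion call.
        raise NotImplementedError("wire up an LLM client here")

    def answer_with_metacognition(question: str, threshold: float = 0.7) -> str:
        draft = ask_model(question)
        # Second pass: the model critiques its own draft and reports confidence.
        rating = ask_model(
            f"Question: {question}\nDraft answer: {draft}\n"
            "From 0 to 1, how confident are you that the draft is factually "
            "correct? Reply with a single number."
        )
        try:
            confidence = float(rating.strip())
        except ValueError:
            confidence = 0.0  # an unparseable self-report counts as low confidence
        # Abstain rather than risk a confabulated answer.
        if confidence < threshold:
            return "I'm not confident enough to answer that reliably."
        return draft

The threshold trades coverage for reliability; a calibrated confidence signal would replace the free-text self-rating in any serious implementation.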

Social Intelligence & AI Alignment

Demonstrated how the same theory-of-mind abilities that enable prosocial perspective-taking also allow sophisticated deception, an insight essential for detecting and constraining deceptive misalignment in AI.

Explore Research →

Cybernetic Personality Modeling

Mapped mechanisms of personality variation across multiple studies and more than 20,000 participants. Currently building personality-informed frameworks for dynamic fine-tuning and model-organism research.

Explore Research →

Seeking Collaboration & Impact

I seek opportunities where AI safety intersects with cognitive and behavioral science. Let's collaborate to ensure AI remains beneficial as it becomes more capable.

Built with Claude Code — From Research to Reality