From Minds to Machines
My path began with questions in developmental and cognitive psychology — how people construct reality, reason about others' minds, and develop social-cognitive capacities across the lifespan. Across 40+ publications and 20,000+ research participants, I've built expertise in exactly the failure modes that matter most as AI enters high-stakes social roles: false pattern detection, manipulative social reasoning, and unreliable self-report. Now I apply that expertise to evaluate model behavior, understand AI's societal impact, and build safety frameworks grounded in behavioral science.
As AI systems increasingly serve as advisors, companions, and decision-support tools, the psychological dimensions of these interactions become central safety questions. Understanding how people — across ages, contexts, and vulnerabilities — actually experience AI is essential for building systems that help rather than harm.
Psychometric methods and experimental design produce rigorous, scalable evaluations of model behavior — from hallucination detection to social reasoning assessment.
Understanding how people actually interact with AI — across ages, contexts, and vulnerabilities — reveals where systems help and where they cause harm.
Translating cognitive biases, developmental considerations, and well-being research into concrete technical requirements and model-behavior safeguards.
AI safety is not purely a technical problem — it requires understanding the humans who use these systems. Decades of research in developmental, social, and cognitive psychology provide the empirical foundation for evaluating model behavior, anticipating societal impacts, and designing AI that works responsibly in the real world.
Cognitive-behavioral principles translated into model evaluations and prompting strategies — reducing LLM hallucinations by 71%.
Evaluation benchmarks for multi-layer nested belief reasoning — assessing a core capability for strategic manipulation, social engineering, and situational awareness in AI systems.
Personality-informed safety guidelines for AI behavior — ensuring model persona consistency and psychologically appropriate interactions across user contexts.
Evaluate frontier model behavior for safety and alignment across the development pipeline — from training data quality through RLHF fine-tuning to deployment-stage evaluation. Design and implement psychologically grounded assessment frameworks in collaboration with research, policy, and engineering teams.
Advanced AI safety research methodology and collaborative project development
Computational neuroscience methods with applications to mechanistic interpretability
Emerging challenges in AI safety and biosecurity, and interdisciplinary approaches to existential risks
Ethical AI development and use, applying frameworks from moral philosophy to questions raised by contemporary AI
Technical alignment studies, including inner/outer alignment, interpretability, and safety frameworks
Using observational and quantitative methods to analyze how diverse populations interact with AI systems — surfacing patterns in real-world use that inform safety evaluations, policy recommendations, and model improvements.
Translating developmental, social, and cognitive psychology into implementable safety guidelines — ensuring AI systems interact with people in ways that are healthy, appropriate, and informed by empirical science on well-being and vulnerability.
Building psychometrically rigorous evaluation suites that assess model behavior across safety-critical dimensions — from advice quality in high-stakes situations to social manipulation resistance.
Seeking roles where psychological expertise directly shapes how AI systems are evaluated, improved, and governed — whether that's analyzing societal impacts of real-world AI use, building behavioral evaluations for model safety, or developing psychological safety guidelines for responsible AI development.
Explore more research areas and projects
scottdougblain@gmail.com
/in/scott-blain-phd/
@ScottDougBlain
Research profile
Let's build AI that's safe for the people who use it.