Building and investing in solutions for AI alignment, safety, and existential risk reduction.
I work at the intersection of frontier AI and high-security systems. My focus is turning abstract safety and security requirements into deployable technical artifacts that can withstand real adversaries.
- Frontier AI Security (SL5): Securing model weights, inference systems, and lab infrastructure against nation-state threats. Contributing to the Security Level 5 (SL5) ecosystem and implementation tooling.
- AI Alignment and Interpretability: Mechanistic interpretability, scalable oversight, and monitoring-based interventions for high-consequence systems.
- Security Engineering and Infrastructure: Air-gapped deployments, attack surface minimization, secure enclaves, and high-assurance runtime design.
- Security Level 5 Task Force (IST): Contributed to SL5 security artifacts and recommendations for frontier AI systems. https://sl5.org/projects
- MechWatch (1st place, Defensive Acceleration Hackathon): A mechanistic-interpretability-based watchdog that monitors internal representations in real time and halts generation when risky cognition emerges. http://mechwatch.luiscos.io/
- SPAR Mentor (AI Safety Research Mentorship): Mentoring projects in frontier AI security and alignment, with an emphasis on building and shipping real prototypes. https://sparai.org/
Mexico City-based technologist and entrepreneur with 15 years of experience across cloud, cybersecurity, and AI. I have architected and launched national-scale systems serving millions of users and built multiple security and cloud products. I am currently pursuing an M.S. in AI at Johns Hopkins University while working full-time on AI safety and governance.
- Website: https://www.luiscos.io
- Twitter / X: https://twitter.com/luiscosio
- LinkedIn: https://www.linkedin.com/in/luiscosio/
- GitHub: https://github.com/luiscosio
