Agent Skills - Yet Another Tool Standard?
Dec 24, 2025
The new standard for packaging reusable workflow capabilities for filesystem-based agent harnesses.
With headlines like "95% of generative AI pilots at companies are failing" making the rounds on social media, the natural question becomes: how do you land in the successful 5%? Having supported and developed multiple enterprise AI products, I've found that the answer is to build robust product evaluation frameworks. That is easier said than done: it requires developing diverse sets of example outputs, recruiting end-user SMEs to annotate data, clustering failure modes, and aligning LLM judges with human preferences so that observability can scale. The process is often overlooked because it is difficult, but following it is what drives sustained improvement and continued development of agentic applications.
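One concrete step in aligning an LLM judge with human preferences is measuring how often the two actually agree beyond chance. Below is a minimal sketch in plain Python, assuming hypothetical binary pass/fail labels from a human SME and an LLM judge on the same eval set; the label lists and function name are illustrative, not from any specific framework.

```python
from collections import Counter

def cohens_kappa(human: list[str], judge: list[str]) -> float:
    """Chance-corrected agreement (Cohen's kappa) between two annotators."""
    assert len(human) == len(judge) and human, "need paired, non-empty labels"
    n = len(human)
    # Observed agreement: fraction of items where both annotators match.
    observed = sum(h == j for h, j in zip(human, judge)) / n
    # Expected agreement if each annotator labeled independently at their
    # own marginal rates.
    h_counts, j_counts = Counter(human), Counter(judge)
    labels = set(human) | set(judge)
    expected = sum(h_counts[l] * j_counts[l] for l in labels) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical annotations on eight eval examples.
human_labels = ["pass", "pass", "fail", "pass", "fail", "fail", "pass", "fail"]
judge_labels = ["pass", "pass", "fail", "fail", "fail", "fail", "pass", "pass"]

print(round(cohens_kappa(human_labels, judge_labels), 2))  # → 0.5
```

A kappa near 0 means the judge agrees with humans no better than chance; values approaching 1 justify letting the judge annotate at scale in place of SMEs.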
Cisco
Part of the MarTech Portfolio & Innovation Team, managing and integrating innovative marketing technology into our tech stack. Technical SME for GenAI solutions, providing hands-on development and internal consulting for both platform-driven and internally built AI software.
The DTH Media Corp.
Led and trained an advertising sales team of 15 reps. Designed and implemented the commission model, the training program, and several new advertising products. Worked with local and national clients as the end-to-end sales and fulfillment rep. Top-performing rep for eighteen consecutive months. Reported to a board of directors.
Degree in Economics, Statistics & Information Systems
Defining, building, and applying LLM evaluations to improve AI products.
Understanding RLVR and creating RL environments for large language models.
LLMs should be given the same tools as humans to interact with the digital world we share.
Need to get in touch? Contact me here