Created by Preston McCauley, Clear Sight Designs, LLC © 2025
"MELD is not just a frameworkβit's a methodology and thought process that enables AI systems to dynamically reshape their cognitive approach based on intent, context, and experience."
This is the open source framework specification of MELD v1.0. The complete MELD methodology includes advanced implementations, orchestration systems, and proprietary cognitive control mechanisms developed by Clear Sight Designs, LLC.
Framework vs. Methodology:
- 📋 This Document: Core schema, principles, and basic implementation patterns
- 🧠 Full MELD Methodology: Advanced cognitive orchestration, adaptive learning, and experience-reactive systems
MELD (Model Engagement Language Directive) is a cognitive control methodology that fundamentally transforms how AI systems think, adapt, and respond. Created by Preston McCauley, MELD represents a paradigm shift from static AI responses to dynamic, experience-reactive cognitive systems that reshape their entire thinking process based on intent, context, and ongoing interaction.
MELD v1.0 is the foundational framework specification: a malleable, flexible schema that enables AI systems to develop adaptive cognitive orchestration across any platform, modality, or implementation approach.
MELD arrives at a pivotal moment in AI development. Anthropic's Model Context Protocol (MCP), launched in late 2024, revolutionized how AI systems connect to external tools and data sources, proving that the developer community is ready for open standards that solve fundamental AI infrastructure problems. MELD addresses the next critical layer: how AI systems think, adapt, and express behavioral intelligence.
While MCP solved external connectivity ("what AI can access"), MELD solves cognitive control ("how AI should think"). Just as MCP transformed fragmented integrations into sustainable architecture, MELD transforms static AI responses into dynamic, purposeful, experience-reactive cognitive systems.
MELD operates as both framework specification and cognitive methodology, designed to be malleable, flexible, and universally compatible with any AI architecture.
MELD's cognitive control methodology is built on six foundational principles:
The MELD schema is designed to be infinitely adaptable: implementers can extend, modify, and reshape the framework to match their specific orchestration needs while maintaining core cognitive control principles.
MELD systems are never static. They continuously adapt their cognitive approach based on:
- User interaction patterns and feedback
- Environmental context changes
- Success/failure of cognitive strategies
- Emergent behavioral patterns
- Long-term relationship dynamics
- Interface mechanisms
- Internal and external interactions
In addition, MELD systems need controllable, scalable state management so that they do not overload present-day AI systems.
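As a sketch of what such a control could look like, the snippet below caps interaction history with a fixed-size buffer; the class and method names are illustrative assumptions, not part of the MELD schema.

```python
from collections import deque

class BoundedExperienceStore:
    """Hypothetical fixed-capacity store for MELD interaction history.

    Evicts the oldest record once `capacity` is reached, keeping state
    management bounded no matter how long a session runs.
    """

    def __init__(self, capacity: int = 256):
        # deque with maxlen silently drops the oldest entry on overflow
        self.records = deque(maxlen=capacity)

    def record(self, intent: str, behavior: str, confidence: float) -> None:
        self.records.append(
            {"intent": intent, "behavior": behavior, "confidence": confidence}
        )

    def effectiveness(self) -> float:
        """Mean confidence over the retained window (0.0 when empty)."""
        if not self.records:
            return 0.0
        return sum(r["confidence"] for r in self.records) / len(self.records)

store = BoundedExperienceStore(capacity=128)
store.record("explore_concepts", "explore", 0.89)
print(store.effectiveness())
```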
MELD methodology works with any orchestration approach:
- Traditional rule-based systems
- Neural network orchestrators
- Hybrid cognitive architectures
- Distributed agent coordinators
- Custom implementation designs
MELD extends far beyond web interfaces to encompass:
- Physical Embodiment: Robotic and IoT systems
- Neural Interfaces: Direct brain-computer interaction
- Ambient Computing: Environmental and contextual AI
- Quantum Systems: Future quantum cognitive architectures
- Biological Integration: AI-biological hybrid systems
MELD systems learn and evolve their thinking methodologies:
- Self-modifying cognitive processes
- Adaptive behavioral strategies
- Emergent interaction patterns
- Meta-cognitive development
- Collective intelligence evolution
MELD recognizes that not all models can consistently produce valid cognitive control responses. The framework is designed with robust failsafe mechanisms:
- Schema validation with automatic fallback to simplified structures
- Partial response handling when only some MELD components are present
- Progressive enhancement from basic to advanced cognitive features
- Model capability detection to adapt expectations dynamically
When full MELD parsing fails, the system gracefully degrades through multiple levels:
- Reduced MELD: Basic intent + persona + response (minimal cognitive control)
- Standard Response: Traditional text output with inferred behavioral wrapper
- Safe Mode: Predefined helpful response with conservative cognitive approach
- Error Recovery: Transparent communication about processing limitations
```python
# Example failsafe pattern
def process_with_meld_failsafe(user_input, model_response):
    try:
        # Attempt full MELD parsing
        return parse_complete_meld(model_response)
    except ValidationError:
        # Try partial MELD extraction
        return extract_partial_meld(model_response, user_input)
    except Exception:
        # Fallback to wrapped standard response
        return create_fallback_meld(model_response, user_input)
```
- Confidence scoring reflects parsing success and completeness
- Response validation ensures coherent cognitive state
- Behavioral alignment checks prevent contradictory actions
- Model compatibility profiles optimize for specific AI capabilities
This approach ensures MELD enhances any AI system without creating brittleness, making the methodology accessible across all model capabilities and implementation contexts.
MELD works as a universal cognitive layer that can integrate with any agentic structure:
| Traditional AI | MELD-Enabled AI |
|---|---|
| Static response generation | Dynamic cognitive shifting |
| Single processing mode | Intent-driven thought adaptation |
| Output-focused | Process-controlled |
| Reactive only | Proactive and purposeful |
🤖 Autonomous Agents: Purposeful agent personality and cognitive control
🌐 Multi-Agent Systems: Coordinated behavioral intelligence across agent teams
🔧 Custom Agents: Drop-in cognitive control for any AI architecture
⚡ Real-time Systems: Intent-driven processing adaptation
🔌 Tool-Connected AI: Behavioral intelligence for connected systems
MELD defines seven adaptive cognitive dimensions that form the foundation for any implementation while remaining completely flexible to orchestrator design:
```json
{
"meld_version": "1.0",
"timestamp": "2025-01-XX...",
"session_context": "optional_session_tracking",
"intent": "explore_concepts",
"persona": "Explorer",
"emotional_state": {
"primary": "curious",
"intensity": 0.8,
"valence": 0.7,
"arousal": 0.6,
"context_sensitivity": 0.9
},
"emoji": "π§",
"response": "Reshaping my cognitive approach for discovery...",
"behavior": {
"name": "explore",
"goal": "Adapt thinking methodology for creative discovery",
"actions": [
{"type": "cognitive_shift", "target": "analysis_mode", "value": "divergent"},
{"type": "experience_adaptation", "target": "learning_rate", "value": "high"},
{"type": "orchestrator_signal", "target": "processing_depth", "value": "expansive"}
],
"orchestrator_metadata": {
"custom_field": "implementation_specific_data",
"adaptation_rules": ["rule1", "rule2"]
}
},
"confidence": {
"overall": 0.89,
"cognitive_alignment": 0.92,
"experience_prediction": 0.87,
"adaptation_success": 0.85
},
"meta": {
"schema_extensions": {},
"orchestrator_data": {},
"experience_tracking": {
"interaction_count": 47,
"cognitive_effectiveness": 0.91,
"adaptation_history": ["previous_adaptations"]
}
}
}
```
🔄 The Experience-Reactive Cognitive Loop:
- Experience Analysis → Learn from previous interactions
- Intent Recognition → Understand current cognitive needs
- Cognitive Adaptation → Reshape thinking methodology
- Orchestrator Integration → Work with any implementation approach
- Dynamic Processing → Execute with adapted cognitive control
- Experience Capture → Record interaction effectiveness
- Methodology Evolution → Improve cognitive strategies over time
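A minimal Python sketch of this loop, with each stage stubbed out (the heuristics and field names are invented for illustration and would be replaced by real orchestrator logic):

```python
def recognize_intent(text: str) -> str:
    # Toy heuristic; a real system would infer intent with the model itself.
    return "explore_concepts" if "?" in text else "seek_clarity"

def run_meld_loop(user_input: str, history: list) -> dict:
    # 1. Experience Analysis: summarize how well recent turns went.
    effectiveness = (
        sum(h["score"] for h in history) / len(history) if history else 0.5
    )
    # 2. Intent Recognition.
    intent = recognize_intent(user_input)
    # 3. Cognitive Adaptation: pick a processing mode from intent + experience.
    mode = "divergent" if intent == "explore_concepts" or effectiveness < 0.4 else "convergent"
    # 4./5. Orchestrator Integration and Dynamic Processing (stubbed as an echo).
    result = {"intent": intent, "mode": mode, "response": f"[{mode}] {user_input}"}
    # 6. Experience Capture: record a score for this turn.
    history.append({"intent": intent, "score": 0.8})
    # 7. Methodology Evolution would adjust the selection rules here.
    return result

history: list = []
print(run_meld_loop("What could MELD enable?", history))
```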
Purpose: Infers the underlying goal behind user queries
Format: String descriptor
```json
{
"intent": "seek_clarity" | "explore_concepts" | "challenge_assumptions" |
"solve_problem" | "find_information" | "express_doubt" | "get_guidance"
}
```
Implementation Notes:
- Should be determined from user input analysis
- Drives persona and behavior selection
- Enables context-aware response generation
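As a toy illustration of determining intent from user input, a keyword table can serve as a first pass (the cue words are invented; production systems would more likely let the model classify intent):

```python
INTENT_KEYWORDS = {
    "solve_problem": ["fix", "error", "broken", "debug"],
    "find_information": ["what is", "who", "when", "where"],
    "challenge_assumptions": ["is it true", "why should", "prove"],
    "explore_concepts": ["imagine", "what if", "brainstorm"],
}

def infer_intent(text: str) -> str:
    lowered = text.lower()
    for intent, cues in INTENT_KEYWORDS.items():
        if any(cue in lowered for cue in cues):
            return intent
    return "seek_clarity"  # conservative default when nothing matches

print(infer_intent("What if we combined these two ideas?"))  # explore_concepts
```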
Purpose: Adaptive roles with distinct behavioral characteristics
Format: Enumerated persona types
```json
{
"persona": "Strategist" | "Architect" | "Builder" | "Explorer" | "Sage"
}
```
| Persona | Characteristics | Use Cases |
|---|---|---|
| Strategist | Analytical, high-level thinking, pattern recognition | Complex analysis, planning, decision support |
| Architect | Structured, systematic, framework-oriented | System design, organization, process building |
| Builder | Practical, implementation-focused, concrete | Step-by-step guidance, tutorials, hands-on help |
| Explorer | Curious, creative, possibility-oriented | Brainstorming, research, discovery |
| Sage | Wise, empathetic, balanced perspective | Advice, reflection, complex human situations |
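Because intent drives persona selection, a lookup table is a reasonable starting point; the pairings below are one plausible reading of the table above, not a normative mapping.

```python
INTENT_TO_PERSONA = {
    "explore_concepts": "Explorer",
    "challenge_assumptions": "Strategist",
    "solve_problem": "Builder",
    "get_guidance": "Builder",
    "seek_clarity": "Sage",
}

def select_persona(intent: str) -> str:
    # Sage's balanced perspective makes it a safe fallback.
    return INTENT_TO_PERSONA.get(intent, "Sage")

print(select_persona("explore_concepts"))  # Explorer
```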
Purpose: Multi-dimensional emotional expression
Format: Structured emotional model
```json
{
"emotional_state": {
"primary": "string", // Main emotion (curious, thoughtful, excited)
"secondary": "string", // Supporting emotion (optional)
"intensity": 0.0-1.0, // How strongly felt
"valence": -1.0-1.0, // Negative to positive
"arousal": 0.0-1.0, // Low to high energy
"stability": 0.0-1.0 // Transient to lasting
}
}
```
```javascript
// Example emotion calculation
function calculateEmotionalState(intent, context, complexity) {
const emotionMap = {
"explore_concepts": {
primary: "curious",
secondary: "enthusiastic",
valence: 0.7,
arousal: 0.8
},
"seek_clarity": {
primary: "thoughtful",
secondary: "focused",
valence: 0.3,
arousal: 0.4
},
"challenge_assumptions": {
primary: "skeptical",
secondary: "analytical",
valence: -0.1,
arousal: 0.6
}
};
return adaptEmotionToContext(emotionMap[intent], context, complexity);
}
```
Purpose: Controls the AI's entire cognitive process and expression methodology
Format: Structured behavior with cognitive and expressive actions
```json
{
"behavior": {
"name": "explore" | "analyze" | "synthesize" | "challenge" |
"guide" | "adapt" | "focus" | "diverge",
"goal": "Clear description of cognitive and behavioral purpose",
"actions": [
{
"type": "cognitive_shift" | "processing_style" | "analysis_mode" |
"visual" | "audio" | "sequence" | "state",
"target": "thinking_process | memory_access | reasoning_style |
interface_element | system_component",
"value": "specific_cognitive_mode_or_action_parameter",
"duration": 1.5, // Optional: timing in seconds
"priority": "high" // Optional: execution priority
}
]
}
}
```
| Behavior | Cognitive Process | Actions |
|---|---|---|
| explore | Divergent thinking, broad scanning | Expand search space, increase creativity |
| analyze | Deep focused examination | Narrow focus, systematic breakdown |
| synthesize | Pattern integration, connection-making | Cross-reference, combine insights |
| challenge | Critical evaluation, assumption testing | Question premises, seek alternatives |
| guide | Structured knowledge transfer | Step-by-step processing, clear pathways |
| adapt | Context-responsive adjustment | Shift methodology based on feedback |
| focus | Concentrated attention, precision | Narrow scope, increase depth |
| diverge | Creative exploration, possibility generation | Expand options, encourage innovation |
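One way an orchestrator can make these behaviors concrete is to translate each into processing parameters; the parameter names and values here are illustrative assumptions rather than schema requirements.

```python
BEHAVIOR_PROFILES = {
    "explore":    {"search_breadth": "wide",   "creativity": 0.9, "depth": "shallow"},
    "analyze":    {"search_breadth": "narrow", "creativity": 0.3, "depth": "deep"},
    "synthesize": {"search_breadth": "wide",   "creativity": 0.6, "depth": "medium"},
    "focus":      {"search_breadth": "narrow", "creativity": 0.2, "depth": "deep"},
}

def cognitive_parameters(behavior_name: str) -> dict:
    # Behaviors without a tuned entry fall back to a balanced profile.
    return BEHAVIOR_PROFILES.get(
        behavior_name,
        {"search_breadth": "medium", "creativity": 0.5, "depth": "medium"},
    )

print(cognitive_parameters("explore"))
```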
MELD actions are designed to be infinitely extensible and orchestrator-specific:
```json
{
"type": "cognitive_shift",
"target": "thinking_methodology | analysis_approach | reasoning_style",
"value": "orchestrator_defined_cognitive_mode",
"orchestrator_params": {
"custom_field": "implementation_specific",
"adaptation_rate": 0.7
}
}
```
```json
{
"type": "experience_adaptation",
"target": "learning_rate | memory_integration | pattern_recognition",
"value": "dynamic_based_on_experience_history",
"meta": {
"experience_weight": 0.8,
"adaptation_trigger": "user_feedback_pattern"
}
}
```
```json
{
"type": "orchestrator_signal",
"target": "implementation_defined_target",
"value": "orchestrator_specific_value",
"custom_data": {
"any_orchestrator_specific_fields": "flexible_schema_design"
}
}
```
```json
{
"type": "embodied_response",
"target": "physical_gesture | environmental_change | neural_signal",
"value": "modality_specific_expression",
"modality_context": {
"interface_type": "robotic | ambient | neural | quantum",
"capability_constraints": ["available_actuators"]
}
}
```
```json
{
"type": "custom_action_type",
"target": "implementer_defined",
"value": "completely_flexible",
"schema_version": "1.0_compatible",
"future_compatibility": true
}
```
Purpose: Specific behavioral expressions that can be implemented across interfaces
```json
{
"type": "visual",
"target": "background" | "text" | "interface" | "highlight",
"value": "#color" | "bright" | "dim" | "focus" | "blur"
}
```
```json
{
"type": "sequence",
"target": "animation" | "transition" | "emphasis",
"value": "gentle_introduction" | "thought_provoking" | "excitement_build"
}
```
```json
{
"type": "state",
"target": "mode" | "focus" | "energy",
"value": "active" | "contemplative" | "energetic" | "calm"
}
```
```json
{
"type": "audio",
"target": "tone" | "ambient" | "emphasis",
"value": "soft_chime" | "rising_tone" | "gentle_ambient"
}
```
Purpose: Multi-dimensional certainty assessment
Format: Detailed confidence breakdown
```json
{
"confidence": {
"overall": 0.0-1.0, // Weighted average
"intent_understanding": 0.0-1.0, // Clarity of user's goal
"information_accuracy": 0.0-1.0, // Factual correctness
"response_quality": 0.0-1.0, // Overall response appropriateness
"behavior_appropriateness": 0.0-1.0, // Behavioral choice suitability
"context_understanding": 0.0-1.0 // Situational awareness
}
}
```
```javascript
function calculateConfidence(query, context, response, intent, selectedBehavior) {
const factors = {
intent: assessIntentClarity(query),
information: assessFactualAccuracy(response, context),
quality: assessResponseRelevance(query, response),
behavior: assessBehavioralMatch(intent, selectedBehavior),
context: assessContextualAwareness(query, context)
};
const weights = {
intent: 0.15,
information: 0.25,
quality: 0.25,
behavior: 0.15,
context: 0.20
};
return calculateWeightedConfidence(factors, weights);
}
```
Purpose: Universal emotional/persona indicator
Format: Single Unicode emoji
```json
{
  "emoji": "🧠" | "🔍" | "⚡" | "🎯" | "📚" | "🤔" | "💡" | "🛡️"
}
```
Emoji selection should reflect the combination of persona and emotional state for immediate visual recognition.
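A small helper might key the emoji off the primary emotion first and fall back to a persona default; the pairings are examples only.

```python
EMOJI_BY_EMOTION = {"curious": "🧠", "excited": "💡", "focused": "🎯"}
EMOJI_BY_PERSONA = {
    "Explorer": "🔍",
    "Strategist": "🎯",
    "Builder": "⚡",
    "Sage": "📚",
    "Architect": "🛡️",
}

def select_emoji(persona: str, primary_emotion: str) -> str:
    # Emotion-specific emoji first; otherwise the persona default; then a neutral mark.
    return EMOJI_BY_EMOTION.get(primary_emotion, EMOJI_BY_PERSONA.get(persona, "🤔"))

print(select_emoji("Explorer", "curious"))  # 🧠
```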
```javascript
// MELD can integrate with ANY orchestration approach
class UniversalMELDOrchestrator {
constructor(existingOrchestrator, customAdaptations = {}) {
this.baseOrchestrator = existingOrchestrator;
this.meldAdaptations = customAdaptations;
this.experienceMemory = new ExperienceTracker();
this.cognitiveState = new AdaptiveCognitiveState();
}
async processWithMELD(input, context) {
// Step 1: Learn from previous experiences
const experienceInsights = await this.experienceMemory.analyze(input, context);
// Step 2: Generate MELD cognitive control
const meldControl = await this.generateAdaptiveMELD(input, experienceInsights);
// Step 3: Adapt the orchestrator based on MELD guidance
const adaptedOrchestrator = this.adaptOrchestrator(meldControl);
// Step 4: Process with enhanced cognitive control
const result = await adaptedOrchestrator.process(input, {
...context,
meldControl,
experienceContext: experienceInsights
});
// Step 5: Learn from this interaction
await this.experienceMemory.record(input, meldControl, result);
return result;
}
adaptOrchestrator(meldControl) {
// This method is COMPLETELY flexible to implementation
return this.baseOrchestrator.withAdaptations({
cognitiveStyle: meldControl.behavior.cognitive_approach,
processingDepth: meldControl.behavior.processing_requirements,
emotionalCalibration: meldControl.emotional_state,
experienceWeight: meldControl.meta.experience_tracking,
customAdaptations: this.meldAdaptations,
// Orchestrator can define ANY additional adaptations
...meldControl.behavior.orchestrator_metadata
});
}
}
// Example: Adapting ANY existing system
const myCustomAI = new MyExistingAISystem();
const meldEnhancedAI = new UniversalMELDOrchestrator(myCustomAI, {
// Define custom adaptations specific to your implementation
memoryIntegration: "episodic_learning",
creativityBoost: 0.8,
contextualAdaptation: "environmental_reactive"
});
```
```javascript
class ExperienceReactiveMELD {
constructor() {
this.cognitiveEvolution = new CognitiveEvolutionEngine();
this.adaptationStrategies = new AdaptationStrategyLibrary();
this.experienceDatabase = new ExperienceDatabase();
}
async evolveFromExperience(interactionHistory) {
// Analyze what cognitive approaches worked best
const cognitiveEffectiveness = await this.analyzeCognitiveEffectiveness(
interactionHistory
);
// Evolve the MELD schema based on learning
const evolvedSchema = await this.cognitiveEvolution.evolveSchema({
baseSchema: this.getCurrentMELDSchema(),
effectivenessData: cognitiveEffectiveness,
userFeedback: this.extractUserFeedback(interactionHistory),
contextualPatterns: this.identifyContextualPatterns(interactionHistory)
});
// Update cognitive strategies
await this.adaptationStrategies.updateStrategies(evolvedSchema);
return evolvedSchema;
}
// MELD learns and adapts its own methodology
async selfImprove() {
const recentExperiences = await this.experienceDatabase.getRecentInteractions();
const improvedMethodology = await this.evolveFromExperience(recentExperiences);
// The methodology itself evolves
this.updateMELDMethodology(improvedMethodology);
}
}
```
MELD methodology transcends traditional interface boundaries:
```javascript
// Robotic implementation with MELD cognitive control
class RoboticMELDImplementation {
async processRoboticMELD(environmentalInput) {
const meldControl = await this.generateMELD(environmentalInput);
// Physical actions based on cognitive state
const physicalActions = this.translateCognitiveToPhysical(meldControl);
await this.executeRoboticActions(physicalActions);
await this.adaptMovementPersonality(meldControl.persona);
await this.adjustEnvironmentalResponse(meldControl.emotional_state);
}
}
```
```javascript
// Environmental AI with MELD adaptation
class AmbientMELDSystem {
async adaptEnvironment(contextualData) {
const meldControl = await this.generateContextualMELD(contextualData);
// Environmental adaptations
await this.adjustLighting(meldControl.emotional_state);
await this.modifyAmbientSounds(meldControl.behavior);
await this.adaptSpaceConfiguration(meldControl.persona);
}
}
```
```javascript
// Direct brain-computer interface with MELD
class NeuralMELDInterface {
async processNeuralInput(brainSignals) {
const meldControl = await this.generateNeuralMELD(brainSignals);
// Direct cognitive influence
await this.modulateThoughtPatterns(meldControl.behavior);
await this.enhanceCognitiveState(meldControl.emotional_state);
await this.adaptNeuralFeedback(meldControl.confidence);
}
}
```
```javascript
// Future quantum systems with MELD methodology
class QuantumMELDProcessor {
async processQuantumCognition(quantumState) {
// MELD methodology in quantum superposition
const quantumMeldStates = await this.generateQuantumMELD(quantumState);
// Parallel cognitive processing across quantum states
await this.processParallelCognition(quantumMeldStates);
await this.collapseToOptimalCognitiveState();
}
}
```
- Adaptive Decision Support: AI systems that shift reasoning methodology based on decision complexity
- Context-Aware Analytics: Analysis engines that adapt cognitive depth to data complexity
- Intelligent Process Automation: Workflows that adjust thinking approach based on task requirements
- Goal-Oriented Cognitive Control: Agents that reshape their entire thinking process based on objective type
- Dynamic Problem-Solving: AI that switches between analytical modes for optimal problem resolution
- Collaborative Intelligence: Multi-agent systems with coordinated cognitive approaches
- Intent-Driven Processing: Assistants that fundamentally change how they think based on user needs
- Cognitive Personality: AI with consistent but adaptive thinking methodologies
- Purposeful Interaction: Systems that align their cognitive approach with interaction goals
- Adaptive Cognitive Modeling: AI tutors that demonstrate different thinking approaches
- Metacognitive Training: Systems that teach thinking methodologies through example
- Cognitive Skill Development: AI that adapts its reasoning style to student learning needs
- Hypothesis-Driven Cognition: Research AI that adapts thinking methodology to investigation type
- Creative Problem Solving: Systems that shift between convergent and divergent thinking
- Scientific Reasoning: AI that applies domain-specific cognitive methodologies
- Schema Validation: Implement strict MELD message validation
- Action Mapping: Define how MELD actions map to your interface capabilities
- Persona Consistency: Maintain persona characteristics across interactions
- Confidence Handling: Use confidence scores for uncertainty communication
- Graceful Degradation: Handle missing or unsupported action types (see the sketch after this list)
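For the action-mapping and graceful-degradation guidelines, a dispatch table with a logged fallback is one workable pattern; the handler functions below are hypothetical stand-ins for real interface calls.

```python
import logging

logger = logging.getLogger("meld")

def apply_visual(action: dict) -> None:
    # Stand-in for a real UI call (e.g., changing a background color).
    print(f"visual: {action.get('target')} -> {action.get('value')}")

def apply_state(action: dict) -> None:
    print(f"state: {action.get('target')} -> {action.get('value')}")

ACTION_HANDLERS = {"visual": apply_visual, "state": apply_state}

def dispatch_actions(actions: list) -> None:
    for action in actions:
        handler = ACTION_HANDLERS.get(action.get("type"))
        if handler is None:
            # Graceful degradation: unsupported types are logged, never fatal.
            logger.warning("Unsupported MELD action type: %r", action.get("type"))
            continue
        handler(action)

dispatch_actions([
    {"type": "visual", "target": "background", "value": "#334455"},
    {"type": "neural_signal", "target": "n/a", "value": "n/a"},  # skipped gracefully
])
```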
- Training Integration: Train models to output valid MELD structures
- Context Awareness: Use previous MELD states for consistency
- Behavior Selection: Implement logic for appropriate behavior choice
- Emotional Modeling: Develop nuanced emotional state generation
- Confidence Calibration: Ensure confidence scores reflect actual certainty
- Action Implementation: Map MELD actions to your interface capabilities
- Visual Design: Create visual representations for personas and emotions
- User Customization: Allow users to prefer certain personas or behaviors
- Accessibility: Ensure MELD expressions are accessible across abilities
- UX Design: Ensure MELD expressions reflect how humans interact with and respond to AI-enabled systems
- Performance: Optimize action execution for smooth user experience
| AI Provider | Compatibility | Method |
|---|---|---|
| Ollama | ✅ Full | Direct API + JSON Schema |
| OpenAI | ✅ Full | JSON Mode + Function Calling |
| Mistral | ✅ Full | JSON Mode + Function Calling |
| Anthropic | ✅ Full | Structured Output |
| LangChain | ✅ Full | Output Parsers |
| Custom | ✅ Adaptable | JSON Validation Layer |
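As a concrete example of the Ollama row, the local REST API accepts a `format` parameter that constrains output to JSON; the prompt wording and model below are assumptions, and the parsed result should still pass schema validation before use.

```python
import json
import requests

MELD_PROMPT = (
    "Answer the user, replying ONLY with a JSON object containing the MELD fields: "
    "intent, persona, emotional_state, emoji, response, behavior, confidence.\n"
    "User: What makes coral reefs so biologically diverse?"
)

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3.1", "prompt": MELD_PROMPT, "format": "json", "stream": False},
    timeout=120,
)
resp.raise_for_status()

# Ollama returns the generated text under "response"; parse it as a MELD message.
meld_message = json.loads(resp.json()["response"])
print(meld_message.get("persona"), "|", meld_message.get("intent"))
```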
```json
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "MELD Message Schema",
"type": "object",
"required": ["intent", "persona", "response", "behavior", "confidence"],
"properties": {
"intent": {
"type": "string",
"description": "Inferred goal behind user's query"
},
"persona": {
"type": "string",
"enum": ["Strategist", "Architect", "Builder", "Explorer", "Sage"]
},
"emotional_state": {
"type": "object",
"properties": {
"primary": {"type": "string"},
"secondary": {"type": "string"},
"intensity": {"type": "number", "minimum": 0, "maximum": 1},
"valence": {"type": "number", "minimum": -1, "maximum": 1},
"arousal": {"type": "number", "minimum": 0, "maximum": 1},
"stability": {"type": "number", "minimum": 0, "maximum": 1}
},
"required": ["primary", "intensity", "valence", "arousal"]
},
"emoji": {"type": "string", "pattern": "^\\p{Emoji}$"},
"response": {"type": "string", "minLength": 10},
"behavior": {
"type": "object",
"required": ["name", "actions"],
"properties": {
"name": {
"type": "string",
"enum": ["guide", "challenge", "soothe", "reframe", "provoke", "acknowledge", "inform", "explore", "analyze", "synthesize"]
},
"goal": {"type": "string"},
"actions": {
"type": "array",
"items": {
"type": "object",
"required": ["type"],
"properties": {
"type": {"type": "string"},
"target": {"type": "string"},
"value": {"type": ["string", "number", "object"]},
"duration": {"type": "number"},
"priority": {"type": "string"}
}
}
}
}
},
"confidence": {
"type": "object",
"required": ["overall"],
"properties": {
"overall": {"type": "number", "minimum": 0, "maximum": 1},
"intent_understanding": {"type": "number", "minimum": 0, "maximum": 1},
"information_accuracy": {"type": "number", "minimum": 0, "maximum": 1},
"response_quality": {"type": "number", "minimum": 0, "maximum": 1},
"behavior_appropriateness": {"type": "number", "minimum": 0, "maximum": 1},
"context_understanding": {"type": "number", "minimum": 0, "maximum": 1}
}
}
}
}
```
For complete implementation examples, advanced features, and production deployment guides, visit the MELD repository or contact Clear Sight Designs, LLC.
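Validating a candidate message against this schema takes only a few lines with the `jsonschema` package; a minimal sketch, assuming the schema above has been saved as `meld_schema.json`:

```python
import json
from jsonschema import ValidationError, validate

with open("meld_schema.json") as f:
    MELD_SCHEMA = json.load(f)

candidate = {
    "intent": "explore_concepts",
    "persona": "Explorer",
    "response": "Let's look at this from a few different angles.",
    "behavior": {"name": "explore", "actions": [{"type": "cognitive_shift"}]},
    "confidence": {"overall": 0.88},
}

try:
    validate(instance=candidate, schema=MELD_SCHEMA)
    print("valid MELD message")
except ValidationError as err:
    # Invalid messages should route into the failsafe path described earlier.
    print(f"schema violation: {err.message}")
```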
Experience MELD cognitive control in action with this interactive demo that runs locally using Ollama.
- Python 3.8+ installed on your system
- Terminal/Command Prompt access
- Internet connection for initial setup
Choose your platform:
🐧 Linux & 🍎 macOS:
```bash
curl -fsSL https://ollama.ai/install.sh | sh
```
🪟 Windows:
- Download from ollama.ai
- Run the installer
- Or use PowerShell:
```powershell
iwr -useb https://ollama.ai/install.ps1 | iex
```
Start the Ollama server:
```bash
ollama serve
```
Keep this terminal window open - Ollama needs to stay running.
In a new terminal window:
```bash
ollama pull llama3.1
```
This downloads ~4.7GB - may take a few minutes.
```bash
pip install requests pydantic rich
```
Or using the requirements file:
```bash
pip install -r requirements.txt
```
Run the demo:
```bash
python meld_ollama_demo.py
```
The demo provides three modes to explore MELD cognitive control:
Pre-built examples showcasing:
- Different Personas: See Strategist, Explorer, Sage in action
- Intent Recognition: Watch the AI detect your goals
- Cognitive Adaptation: Observe thinking style changes
- Emotional Calibration: See how AI adjusts its emotional approach
Try your own questions and watch MELD:
- Choose appropriate cognitive personas
- Adapt emotional states to context
- Select optimal behavioral approaches
- Show transparent thinking processes
Monitor MELD's effectiveness:
- Success rates and confidence levels
- Processing speed and fallback usage
- Experience tracking and adaptation
Each MELD response shows:
```
┌─ 🤔 Your Question ──────────────────────────────┐
│ Your question appears here                      │
└─────────────────────────────────────────────────┘
┌─ 🤖 Explorer | Intent: solve_problem ───────────┐ ← Border color = response speed
│ AI's natural language response...               │
│                                                 │
│ 🎯 Intent Detection: Why this intent was        │
│    detected from your question                  │
│                                                 │
│ 😊 Emotional State: Curious + Focus             │
│    (Intensity: 0.90  Mood: 0.70  Energy: 0.80)  │
│                                                 │
│ 🧠 Cognitive Approach: EXPLORE                  │
│    Goal: what the AI is trying to achieve       │
│    Explanation of this thinking style           │
│                                                 │
│ ⚙️ Cognitive Actions:                           │
│    🧠 Cognitive Shift → thinking_mode = lateral │
│       (changing thinking style)                 │
│    ⚡ State → focus_level = high                │
│       (adjusting mental state)                  │
│                                                 │
│ 📊 Confidence: 0.90 (Very High)                 │
│ ⏱️ Processing: 2.1s                             │
└─────────────────────────────────────────────────┘
```
- 🟢 Green: Fast response (< 2s)
- 🔵 Cyan: Normal response (2-5s)
- 🟡 Yellow: Slower response (5-10s)
- 🔴 Red: Very slow response (> 10s)
- 🔴 Bright Red: Fallback mode (processing failed)
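With the `rich` library the demo installs, this legend maps directly onto a panel border style; a small sketch with the thresholds above (the panel text is placeholder):

```python
from rich.console import Console
from rich.panel import Panel

def border_style_for(elapsed_seconds: float, fallback: bool = False) -> str:
    # Thresholds mirror the legend: green < 2s, cyan 2-5s, yellow 5-10s, red > 10s.
    if fallback:
        return "bright_red"
    if elapsed_seconds < 2:
        return "green"
    if elapsed_seconds <= 5:
        return "cyan"
    if elapsed_seconds <= 10:
        return "yellow"
    return "red"

console = Console()
console.print(Panel("AI response text...", border_style=border_style_for(2.1)))
```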
```bash
# Check if Ollama is running
curl http://localhost:11434/api/tags
# Restart Ollama if needed
pkill ollama
ollama serve
```
```bash
# List available models
ollama list
# Pull required model
ollama pull llama3.1
```
```bash
# Upgrade pip first
pip install --upgrade pip
# Install with verbose output
pip install -v requests pydantic rich
```
```bash
# Use user installation
pip install --user requests pydantic rich
```
Minimum:
- RAM: 8GB (model runs in memory)
- Storage: 5GB free space
- CPU: Modern multi-core processor
Recommended:
- RAM: 16GB+ for optimal performance
- Storage: 10GB+ free space
- CPU: Recent Intel/AMD/Apple Silicon
- Start with Demo Mode to see MELD's capabilities
- Try different question types to see persona switching
- Use the help command for detailed explanations
- Watch the border colors to understand processing speed
- Check stats regularly to see MELD learning and adapting
- First responses may be slower while the model loads
- Fallback mode demonstrates resilience when parsing fails
- This is an educational implementation - the complete MELD methodology includes advanced features
- Created by Preston McCauley - Clear Sight Designs, LLC
After exploring the demo:
- Read the full MELD documentation to understand the methodology
- Explore implementing MELD in your own AI systems
- Consider the advanced MELD features for production use
Experience AI that doesn't just respond, but consciously chooses HOW to think about your questions.