A high-performance streaming Markdown renderer designed for AI command-line tools. It renders AI output in real time with Markdown syntax highlighting and works with all mainstream AI CLI tools.
Chinese README | English README
Temporarily unavailable on Windows, but you can use it in WSL.
- **Streaming Rendering** - Renders AI output in real time, without waiting for the complete response
- **Rich Format Support** - Headers, lists, code blocks, bold, italic, and all other Markdown elements
- **Universal Compatibility** - Works with any AI command-line tool (Gemini, Claude, OpenAI, etc.)
- **Zero Configuration** - Transparent parameter passing; no per-tool adaptation needed
- **Debug Mode** - Built-in debugging for easy troubleshooting
- **Pipe Friendly** - First-class support for Unix pipe operations
- **Multi-language** - Automatic language detection (Chinese/English)
| Format | Syntax | Rendered Effect |
|---|---|---|
| Headers | `# Header` | Colored headers |
| Bold | `**bold**` | Yellow bold |
| Italic | `*italic*` | Italic text |
| Bold Italic | `***bold italic***` | Yellow bold italic |
| Inline Code | `` `code` `` | Orange-red code |
| Code Block | ```` ```language code ``` ```` | Bordered code block |
| Lists | `- item` | Purple lists |
| Quotes | `> quote` | Yellow border |
Ensure you have Rust 1.70+ installed:

```bash
# Clone the repository
git clone https://github.com/n-WN/aimd.git
cd aimd

# Build
cargo build --release

# Install to your system
cargo install --path .
```
```bash
# Basic usage
aimd

# Pipe mode
echo "Explain what Rust programming language is" | aimd

# Explicit command mode
aimd -- gemini --model gemini-2.5-flash -p "Introduce Markdown syntax"
```
Pipe mode is the most concise way to use aimd: piped input is automatically passed as the prompt to the default Gemini backend:
```bash
# Simple question
echo "What is machine learning?" | aimd

# File content as input
cat question.txt | aimd

# Multi-line input
cat << EOF | aimd
Please introduce in Markdown format:
1. Python features
2. Common libraries
3. Application scenarios
EOF
```
Any AI CLI tool is supported; just specify the complete command after `--`:
```bash
# Gemini
aimd -- gemini --model gemini-2.5-flash -p "Introduce Rust"

# Claude
aimd -- claude --model sonnet -p "Explain async/await"

# OpenAI
aimd -- openai api chat.completions.create -m gpt-4 --messages '[{"role":"user","content":"Hello"}]'

# Custom tools
aimd -- my-ai-tool --custom-param value "prompt"
```
Use `--debug` to view detailed execution information:
```bash
# Debug pipe input
echo "Test input" | aimd --debug

# Debug an explicit command
aimd --debug -- gemini --help
```
| Parameter | Description | Example |
|---|---|---|
| `--debug` | Enable debug mode and show execution details | `aimd --debug` |
| `--help` | Show help information | `aimd --help` |
| `--` | Parameter separator; everything after it is passed to the AI tool | `-- gemini -p "hello"` |
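Because everything after `--` is forwarded verbatim, aimd never needs to understand the downstream tool's flags. A minimal sketch of how such a zero-loss split can work (illustrative only, not the project's actual source):

```rust
use std::env;

fn main() {
    let args: Vec<String> = env::args().skip(1).collect();

    // Split at the first `--`: the left side is aimd's own flags,
    // the right side is the downstream AI tool's command line.
    let (own, tool_cmd): (&[String], &[String]) =
        match args.iter().position(|a| a == "--") {
            Some(i) => (&args[..i], &args[i + 1..]),
            None => (&args, &[]),
        };

    // `tool_cmd` is forwarded untouched, which is why no per-tool
    // adaptation is ever needed.
    if own.iter().any(|a| a == "--debug") {
        eprintln!("own flags: {:?}", own);
        eprintln!("forwarded: {:?}", tool_cmd);
    }
}
```

For example, `aimd --debug -- gemini -p "hello"` would print `own flags: ["--debug"]` and `forwarded: ["gemini", "-p", "hello"]`.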
- Low Latency: Streaming processing with a first-byte response time under 10 ms
- Memory Efficient: Line-buffered processing with constant memory usage
- Cross-platform: Supports macOS and Linux (Windows via WSL)
- Zero Configuration: Works out of the box, no configuration files needed
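The constant-memory claim follows from line-buffered I/O: one reusable buffer per line instead of accumulating the whole response. A minimal sketch of that pattern, assuming nothing about aimd's internals beyond what the list above states:

```rust
use std::io::{self, BufRead, Write};

fn main() -> io::Result<()> {
    let stdin = io::stdin();
    let mut input = stdin.lock();
    let mut out = io::stdout().lock();

    // One String buffer, reused for every line: memory stays constant
    // regardless of how much output the AI tool streams.
    let mut line = String::new();
    while input.read_line(&mut line)? != 0 {
        // A real renderer would colorize `line` here before writing.
        out.write_all(line.as_bytes())?;
        out.flush()?; // flush per line to keep first-byte latency low
        line.clear();
    }
    Ok(())
}
```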
```
Input Source → PTY Process → Stream Parsing → Markdown Rendering → Terminal Output
     ↓              ↓               ↓                  ↓                   ↓
 Pipe/Args       AI Tool     Line Processing     ANSI Coloring    Real-time Display
```
- PTY Management: Uses `pty-process` to manage pseudo-terminals
- Stream Parsing: Custom state machine for Markdown parsing (sketched below)
- ANSI Rendering: Native ANSI escape sequences for coloring
- Parameter Passthrough: Zero-loss parameter passing mechanism
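As an illustration of the stream-parsing idea, a per-line colorizer only needs a small amount of state, such as whether it is currently inside a fenced code block. The sketch below is a simplification for exposition, not aimd's actual parser, and the ANSI color choices are assumptions:

```rust
/// Simplified per-line Markdown colorizer: a sketch of the
/// state-machine idea, not aimd's real implementation.
struct Renderer {
    in_code_block: bool, // the only state this sketch tracks
}

impl Renderer {
    fn render_line(&mut self, line: &str) -> String {
        // A fence line toggles the code-block state.
        if line.trim_start().starts_with("```") {
            self.in_code_block = !self.in_code_block;
            return format!("\x1b[90m{line}\x1b[0m"); // dim the fence
        }
        if self.in_code_block {
            return format!("\x1b[38;5;208m{line}\x1b[0m"); // orange-red code
        }
        if line.starts_with('#') {
            return format!("\x1b[1;34m{line}\x1b[0m"); // bold blue header
        }
        if line.starts_with('>') {
            return format!("\x1b[33m{line}\x1b[0m"); // yellow quote
        }
        // Inline spans (**bold**, *italic*, `code`) need a
        // character-level pass; omitted here for brevity.
        line.to_string()
    }
}

fn main() {
    let mut r = Renderer { in_code_block: false };
    for l in ["# Title", "```rust", "fn main() {}", "```", "> quote"] {
        println!("{}", r.render_line(l));
    }
}
```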
The program automatically detects the system language from environment variables:

- Chinese: when `LANG`, `LC_ALL`, or `LC_MESSAGES` starts with `zh`
- English: default for all other cases

Environment variables are checked in order: `LANG`, `LC_ALL`, `LC_MESSAGES`.
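A sketch of that detection logic using only the standard library (assuming a `zh` prefix in any of the three variables selects Chinese):

```rust
use std::env;

/// Checks the three locale variables in the documented order and
/// treats a value starting with "zh" as Chinese.
fn is_chinese_locale() -> bool {
    ["LANG", "LC_ALL", "LC_MESSAGES"]
        .into_iter()
        .filter_map(|var| env::var(var).ok())
        .any(|value| value.starts_with("zh"))
}

fn main() {
    let lang = if is_chinese_locale() { "Chinese" } else { "English" };
    println!("detected UI language: {lang}");
}
```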
| Tool | Status | Tested Version | Notes |
|---|---|---|---|
| Gemini CLI | ✅ Full Support | latest | Google's official CLI |
| Claude CLI | ✅ Full Support | latest | Anthropic's official CLI |
| OpenAI CLI | ✅ Full Support | latest | OpenAI's official CLI |
| Ollama | ✅ Full Support | v0.1.0+ | Local AI models |
| Custom Tools | ✅ Full Support | - | Any command that outputs text |
### Input

````markdown
# AI Tool Comparison

## Main Features

- **Gemini**: Google's multimodal AI
- **Claude**: Anthropic's conversational AI
- **GPT**: OpenAI's generative AI

### Code Example

```python
import openai

client = openai.OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)
```

> Choose the right AI tool based on specific needs
````
### Rendered Output
*(Screenshot: rendered terminal output)*
## Development
### Build Requirements
- Rust 1.70.0+
- Cargo
### Development Environment
```bash
# Clone and enter the directory
git clone https://github.com/n-WN/aimd.git
cd aimd

# Run tests
cargo test

# Run in development mode
cargo run -- --debug -- echo -e "# Test\n**Bold text**"

# Release build
cargo build --release
```
```
aimd/
├── src/
│   └── main.rs       # Main program logic
├── Cargo.toml        # Dependency configuration
├── README.md         # Chinese documentation
├── README.en.md      # English documentation
└── docs/             # Documentation resources
```
- **AI tool not found**

  ```bash
  # Ensure the AI tool is installed and in your PATH
  which gemini
  ```

- **Permission issues**

  ```bash
  # Ensure the binary is executable
  chmod +x target/release/aimd
  ```

- **Encoding issues**

  ```bash
  # Set the correct terminal encoding
  export LANG=en_US.UTF-8
  ```

Debugging tips:

- Use `--debug` to view detailed information
- Check that the AI tool works normally on its own
- Verify that parameters are passed correctly
Contributions are welcome! Please follow this workflow:

- Fork this repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
- Use `cargo fmt` to format code
- Use `cargo clippy` for code quality checks
- Add necessary test cases
This project is licensed under the MIT License - see the LICENSE file for details.
- pty-process - PTY management
- Rust Community - Excellent ecosystem
- All contributors and users for their support
- Project Homepage: GitHub
- Issue Reports: Issues
- Feature Requests: Discussions
⭐ If this project helps you, please give it a star!