A Neovim plugin for counting tokens in text files using various AI model tokenizers. Features background caching for optimal performance with file explorers and status lines.
```lua
{
  "3ZsForInsomnia/token-count.nvim",
  opts = {
    model = "gpt-5", -- Default model for counting
  },
}
```
The plugin automatically creates a virtual environment and installs required Python libraries when first used:
:TokenCount " Triggers automatic setup on first use
Check setup status:
```vim
:checkhealth token-count
```
:TokenCount " Count tokens in current buffer
:TokenCountModel " Change the active model
:TokenCountAll " Count tokens across all open buffers
:TokenCountSelection " Count tokens in visual selection (in visual mode)
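If you prefer mappings over typing commands, here is a minimal sketch; the key choices are illustrative, not plugin defaults:

```lua
-- Illustrative mappings for the plugin's commands; pick keys that suit you.
vim.keymap.set("n", "<leader>tc", "<cmd>TokenCount<CR>", { desc = "Count tokens in current buffer" })
vim.keymap.set("n", "<leader>tm", "<cmd>TokenCountModel<CR>", { desc = "Change the active model" })
vim.keymap.set("n", "<leader>ta", "<cmd>TokenCountAll<CR>", { desc = "Count tokens across all open buffers" })
```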
require("token-count").setup({
model = "gpt-4", -- Default model (see MODELS.md for all options)
log_level = "warn", -- Logging verbosity
context_warning_threshold = 0.4, -- Warn at 40% context usage
-- Ignore patterns for background processing (Lua patterns)
ignore_patterns = {
"node_modules/.*", -- Node.js dependencies
"%.git/.*", -- Git repository files
"%.svn/.*", -- SVN files
"build/.*", -- Build directories
"dist/.*", -- Distribution directories
"target/.*", -- Build target directories
"vendor/.*", -- Vendor dependencies
"%.DS_Store", -- macOS system files
"%.vscode/.*", -- VS Code settings
"%.idea/.*", -- IntelliJ settings
},
-- Optional: Enable official API token counting (requires API keys)
enable_official_anthropic_counter = false, -- Requires ANTHROPIC_API_KEY
enable_official_gemini_counter = false, -- Requires GOOGLE_API_KEY
})
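Note that `ignore_patterns` entries are Lua patterns, not shell globs, so literal dots need `%.` escapes. A minimal sketch of adding project-specific patterns; the `logs` and minified-JS entries are purely illustrative:

```lua
require("token-count").setup({
  ignore_patterns = {
    "node_modules/.*",
    "%.git/.*",
    "logs/.*",    -- illustrative: skip a project-specific logs directory
    "%.min%.js$", -- illustrative: skip minified JavaScript (dots escaped as %.)
  },
})
```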
The plugin automatically detects and processes a wide variety of text-based files including:
- Programming languages: JavaScript, TypeScript, Python, Go, Rust, Java, C/C++, Lua, Ruby, PHP, Swift, Kotlin, and many more
- Web technologies: HTML, CSS, SCSS, Vue, Svelte, JSON, XML
- Documentation: Markdown, reStructuredText, LaTeX, plain text
- Configuration: YAML, TOML, INI, environment files
- Data formats: CSV, SQL, GraphQL
- Infrastructure: Dockerfile, Terraform, Kubernetes YAML
- Files under 512KB: Full token counting
- Files over 512KB: Displayed as "LARGE" to indicate the file exceeds processing limits
- Performance: Background processing uses parallel execution for optimal speed
The plugin automatically skips common directories and files that shouldn't be processed:
- Dependency and build directories: `node_modules`, `.git`, `build`, `dist`, `target`
- System files like `.DS_Store`
- IDE configuration directories
You can customize ignore patterns in your configuration (see Configuration section above).
View the complete list of supported models in MODELS.md.
The plugin supports 60+ models including GPT-4/5, Claude, Gemini, Llama, Grok, and more. Token counting accuracy varies by provider: counts are exact for OpenAI and DeepSeek models and estimated for the rest.
If Telescope is installed, you get an enhanced model picker with fuzzy search and preview:
:TokenCountModel " Opens Telescope picker automatically
Or use directly:
```vim
:Telescope token_count models
```
The picker provides fuzzy search and a preview of each model's details.
```lua
require('lualine').setup({
  sections = {
    lualine_c = {
      require('token-count.integrations.lualine').current_buffer,
    },
  },
  winbar = {
    lualine_c = {
      require('token-count.integrations.lualine').all_buffers,
    },
  },
})
```
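lualine also accepts components as tables carrying display options, so the provided function can be wrapped to adjust its appearance; a small sketch, with an illustrative icon choice:

```lua
local token_count = require('token-count.integrations.lualine').current_buffer

require('lualine').setup({
  sections = {
    -- Wrap the component in a table to attach lualine display options.
    lualine_c = { { token_count, icon = '🪙' } },
  },
})
```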
require("token-count.integrations.neo-tree").setup({
component = {
enabled = true,
show_icon = true,
icon = "πͺ",
}
})
Shows token counts next to files and directories with background processing.
Select text in visual mode and use `:TokenCountSelection`:
vim.keymap.set("v", "<leader>tc", ":TokenCountSelection<CR>", {
desc = "Count tokens in visual selection",
silent = true
})
require("token-count").get_current_buffer_count(function(result, error)
if result then
print("Tokens:", result.token_count)
print("Model:", result.model_config.name)
end
end)
```lua
-- List the supported models and inspect the currently active one.
local models = require("token-count").get_available_models()
local model_config = require("token-count").get_current_model()
```
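A short sketch combining these calls; it relies only on the `name` field shown in the callback example above and uses `vim.inspect` to print the models list without assuming its exact shape:

```lua
local token_count = require("token-count")

-- Print the active model's name (field shown in the callback example above).
print("Active model:", token_count.get_current_model().name)

-- Dump the available models without assuming their structure.
print(vim.inspect(token_count.get_available_models()))
```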
```lua
local cache = require("token-count.cache")

-- Look up a cached count for a specific file.
local file_tokens = cache.get_file_token_count("/path/to/file.lua")

cache.clear_cache() -- Drop all cached counts
local stats = cache.get_stats() -- Inspect cache statistics
```
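As one possible use, here is a sketch of a custom statusline expression built on the cache; it assumes `get_file_token_count` returns a number, or `nil` for files not yet cached:

```lua
-- Minimal sketch of a statusline expression backed by the cache.
_G.StatuslineTokens = function()
  local cache = require("token-count.cache")
  -- Assumed: returns a number, or nil when the file is not cached yet.
  local count = cache.get_file_token_count(vim.api.nvim_buf_get_name(0))
  return count and ("tokens: " .. count) or ""
end

vim.o.statusline = "%f %= %{v:lua.StatuslineTokens()}"
```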
```vim
:checkhealth token-count
```
This reports the status of the plugin's setup, including its Python environment and dependencies.
The plugin features a unified background cache that keeps counts fast for file explorers and status lines. Caching is enabled by default with sensible settings; see ADVANCED.md for detailed configuration.
```vim
:TokenCountVenvStatus " Check detailed status
:TokenCountVenvSetup  " Recreate if needed
```
Ensure Python 3.7+ is available:
```sh
python3 --version
```
Check available models:
:TokenCountModel " Browse and select models
MIT License - see LICENSE file for details.
Count AI model tokens in your files. Works locally for most models, with smart background caching that stays out of your way.
- Know if your code fits in model context windows before you hit limits
- Background counting doesn't slow down your editor - processes files when you're not typing
- Exact counts for OpenAI and DeepSeek models, smart estimates for everything else
- Seamless integrations with lualine and neo-tree show counts without extra commands
- Large file handling - estimates large background files (marked with *), full counts for active files
```lua
{
  "zacharylevinw/token-count.nvim",
  dependencies = {
    "nvim-telescope/telescope.nvim", -- Optional: enhanced model selection
  },
  config = function()
    require("token-count").setup({
      model = "gpt-4o", -- Default model
    })
  end,
}
```
Prerequisites: Neovim 0.9.0+, Python 3.7+
The plugin automatically sets up its Python environment and dependencies on first use. Just run `:TokenCount` and it handles the rest.
:TokenCount " Count tokens in current file
:TokenCountModel " Switch between models
:TokenCountAll " Count all open files
```lua
require('lualine').setup({
  sections = {
    lualine_c = {
      require('token-count.integrations.lualine').current_buffer,
    },
  },
})
```
require("token-count.integrations.neo-tree").setup({
component = {
enabled = true,
show_icon = true,
icon = "πͺ",
}
})
Shows token counts next to files and directories. Large files in the background get estimated counts (marked with *).
- Active/small files: Full accurate counting using the best available method
- Large background files (>512KB): Smart estimation to keep things fast
- Exact counting: OpenAI models (via tiktoken), DeepSeek models (via official tokenizer)
- Smart estimates: All other models via tokencost library
- Optional API counting: set `ANTHROPIC_API_KEY` or `GOOGLE_API_KEY` for exact Anthropic/Google counts (not recommended - prefer local; see the sketch below)
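Since both switches are plain booleans, one pattern is to enable them only when the corresponding key is actually present in the environment:

```lua
-- Enable the official API counters only when their keys are set.
require("token-count").setup({
  enable_official_anthropic_counter = vim.env.ANTHROPIC_API_KEY ~= nil,
  enable_official_gemini_counter = vim.env.GOOGLE_API_KEY ~= nil,
})
```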
Supports 60+ models including GPT-4/5, Claude, Gemini, Llama, and more. See MODELS.md for the complete list.
Switch models anytime with `:TokenCountModel` (uses Telescope if available for better search).
See ADVANCED.md for:
- Public API for custom integrations
- Using with other status line plugins
- Programmatic access to token counts
- Virtual environment management
- Setup issues: run `:checkhealth token-count`
- Dependencies: run `:TokenCountVenvStatus`
- Python not found: ensure Python 3.7+ is in your PATH
MIT License