- News
- Todo-List
- Overview
- Installation
- Training
- Evaluation
- Citation
News

- [2025-09] We release a new paper: "Scaling Generalist Data-Analytic Agents".
- [2025-06] We release a new paper: "Why Do Open-Source LLMs Struggle with Data Analysis? A Systematic Empirical Study".
Todo-List

- RL training code will be released soon.
- RL and evaluation data will be released soon.
Overview

Data-analytic agents are emerging as a key catalyst for automated scientific discovery and for the vision of Innovating AI. Current approaches, however, rely heavily on prompt engineering or multi-agent scaffolds over proprietary models, while open-source models still struggle with the diverse-format, large-scale data files and the long-horizon, multi-step reasoning that real-world analytics demands. This paper introduces DataMind, a scalable data synthesis and agent training recipe for building generalist data-analytic agents. DataMind tackles three key challenges in building open-source data-analytic agents: insufficient data resources, improper training strategies, and unstable code-based multi-turn rollout.
Concretely, DataMind applies:
- A fine-grained task taxonomy and a recursive easy-to-hard task composition mechanism to increase the diversity and difficulty of synthesized queries;
- A knowledge-augmented trajectory sampling strategy followed by model-based and rule-based filtering;
- A dynamically adjustable training objective combining both SFT and RL losses;
- A memory-frugal and stable code-based multi-turn rollout framework.
Built on DataMind, we curate DataMind-12K, a high-quality trajectory set spanning diverse domains, task categories, and data file formats for data-analytic tasks. Trained on DataMind-12K, our DataMind-14B achieves state-of-the-art performance with an average score of 71.16% across multiple data-analysis benchmarks, outperforming the strongest proprietary baselines DeepSeek-V3.1 and GPT-5. Our DataMind-7B also performs best among all open-source models with a score of 68.10%. We also share empirical insights gained from our exploratory trials in the analysis experiments, aiming to provide actionable guidance on agent training for the community. We will release DataMind-12K and DataMind-7B/14B for the community's future research.
Installation

Conda virtual environments offer a lightweight and flexible setup. We recommend using a separate conda environment for each project. Prerequisites:
- Anaconda Installation
- GPU support (recommended CUDA version: 12.6)
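For example, you can create and activate a dedicated environment as follows; this is a minimal sketch, and the environment name and Python version are our assumptions:

# Minimal sketch: one isolated environment per project (name and version are placeholders)
conda create -n datamind python=3.10 -y
conda activate datamind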
For Scaling Generalist Data-Analytic Agents:

SFT training

For SFT training, we use the LLaMA-Factory (0.9.4.dev0) framework.

cd train/SFT/LLaMA-Factory
pip install -e ".[torch,metrics]" --no-build-isolation
RL training

For RL training, we use the verl (v0.4.0) framework.

cd train/RL/verl
USE_MEGATRON=0 bash scripts/install_vllm_sglang_mcore.sh
pip install -e .[vllm]
pip install -e .[sglang]
apt install sqlite3
Eval

cd eval/Datamind
pip install -r requirements.txt
apt install sqlite3
For Why Do Open-Source LLMs Struggle with Data Analysis? A Systematic Empirical Study:

SFT training

For SFT training, we use the same LLaMA-Factory (0.9.4.dev0) framework as above.

cd train/SFT/LLaMA-Factory
pip install -e ".[torch,metrics]" --no-build-isolation
Eval

cd eval/DataMind-Qwen2.5
pip install -r requirements.txt
Training

SFT training

Our SFT models are trained with the LLaMA-Factory framework (0.9.4.dev0), which provides an efficient fine-tuning workflow.
The training dataset datamind_12k from Scaling Generalist Data-Analytic Agents is available on Hugging Face as datamind-12k. Download it and place it at train/SFT/LLaMA-Factory/data/datamind/datamind_12k.json.
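If it helps, here is a minimal download sketch using huggingface-cli; <hf-org> and the file name inside the dataset repository are placeholders, so substitute the actual values:

# <hf-org> is a placeholder for the organization hosting datamind-12k on Hugging Face
huggingface-cli download <hf-org>/datamind-12k --repo-type dataset --local-dir ./datamind-12k
# Assumed file name inside the dataset repo; adjust if it differs
cp ./datamind-12k/datamind_12k.json train/SFT/LLaMA-Factory/data/datamind/datamind_12k.json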
The training dataset datamind-da-dataset from Why Do Open-Source LLMs Struggle with Data Analysis? A Systematic Empirical Study is provided at train/SFT/LLaMA-Factory/data/datamind/datamind-da-dataset.json.
We provide our configurations for full-parameter fine-tuning with DeepSpeed ZeRO-3 as YAML files: train/SFT/LLaMA-Factory/examples/train_full/datamind_12k_full_sft.yaml and train/SFT/LLaMA-Factory/examples/train_full/datamind_da_dataset_full_sft.yaml.
You can start training with the following command, taking datamind_12k_full_sft.yaml as an example, or use the shell script train/SFT/LLaMA-Factory/train.sh.
CUDA_VISIBLE_DEVICES=0,1,2,3 llamafactory-cli train examples/train_full/datamind_12k_full_sft.yaml
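Depending on your LLaMA-Factory version, multi-GPU full-parameter training with ZeRO-3 may need to be launched through torchrun, which LLaMA-Factory exposes via the FORCE_TORCHRUN switch; a sketch assuming the same four GPUs:

# Launch via torchrun if plain llamafactory-cli does not spawn distributed workers
FORCE_TORCHRUN=1 CUDA_VISIBLE_DEVICES=0,1,2,3 llamafactory-cli train examples/train_full/datamind_12k_full_sft.yaml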
RL training

Our RL training framework is modified from verl (v0.4.0), a flexible, efficient, and production-ready RL training library for large language models (LLMs).
The training data will be released soon.
The training code will be released soon.
Evaluation

Eval for Scaling Generalist Data-Analytic Agents

The evaluation data will be released soon. Once available, unzip the archives and place them in the corresponding folders, following the layout below:
├── model.sh
├── requirements.txt
├── python
│   ├── compute_pass3.py
│   ├── da-dev-tables
│   ├── eval_python.py
│   ├── eval.sh
│   ├── interpreter.py
│   ├── tablebench_csv
│   └── test_file
│       ├── daeval_test.parquet
│       └── tablebench_test.parquet
└── sql
    ├── bird
    │   ├── bird_dev_csv_results
    │   ├── dev_sqlite_files
    │   ├── bird_dev_omni_ddl.json
    │   └── test_file
    │       └── bird_dev.parquet
    ├── compute_pass3.py
    ├── eval_bird.py
    ├── eval.sh
    └── interpreter.py
We use vLLM to launch a local model server. You can modify model.sh to suit your environment and run it to start the model server.
bash model.sh
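If you need a reference point, model.sh presumably wraps vLLM's OpenAI-compatible server along these lines; the model path, GPU list, and parallel size below are placeholders, and the port should match the PORT used in the evaluation scripts:

# Minimal sketch of a vLLM OpenAI-compatible server launch; values are placeholders
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m vllm.entrypoints.openai.api_server \
    --model /path/to/your/model \
    --served-model-name datamind \
    --tensor-parallel-size 4 \
    --port 19007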
You can modify eval/python/eval.sh and run it to start the Python evaluation. Note that you should set the base_url and api_key for the judge model in eval/python/eval_python.py.
PORT=19007
export OPENAI_BASE_URL=http://0.0.0.0:${PORT}/v1
export OPENAI_API_KEY=placeholder_key
python eval_python.py \
--model datamind \
--temperature 0.7 \
--top_p 0.95 \
--bs 5 \
--test_bench dabench \
--test_file test_file/daeval_test.parquet \
--csv_or_db_folder da-dev-tables
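Since decoding uses temperature 0.7, the results are presumably sampled multiple times and aggregated into pass@3 by compute_pass3.py (the sql folder ships an analogous script). Its flags are not documented here, so check the script before running; a hypothetical invocation:

# Hypothetical: aggregate pass@3 over the generated results (verify the script's actual CLI)
python compute_pass3.py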
You can modify eval/sql/eval.sh and run it to start the SQL evaluation.
PORT=19008
export OPENAI_BASE_URL=http://0.0.0.0:${PORT}/v1
export OPENAI_API_KEY=placeholder_key
python eval_bird.py \
--model datamind \
--temperature 0.7 \
--top_p 0.95 \
--bs 5 \
--test_bench bird \
--test_file bird/test_file/bird_dev.parquet \
--csv_or_db_folder bird/dev_sqlite_files \
--gold_csv_results_dir bird/bird_dev_csv_results \
--db_schema_data_path bird/bird_dev_omni_ddl.json
Eval for Why Do Open-Source LLMs Struggle with Data Analysis? A Systematic Empirical Study

Note:
- Ensure that your working directory is set to the eval/DataMind-Analysis folder in a virtual environment.
- If you need to use a local model, deploy it first according to the (optional) local_model.sh.
- If you have more questions, feel free to open an issue with us.
Step 1: Download the evaluation datasets and our SFT models
The evaluation datasets we use are QRData and DiscoveryBench. The script expects the data to be at data/QRData/benchmark/data/*.csv and data/DiscoveryBench/*.csv.
You can also download our SFT models directly from Hugging Face: DataMind-Analysis-Qwen2.5-7B and DataMind-Analysis-Qwen2.5-14B.
You can use the following bash script to download the datasets:
bash download_eval_data.sh
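After the download finishes, you can sanity-check that the files match the layout do_generate.py expects:

# Verify the expected dataset layout (paths from Step 1 above)
ls data/QRData/benchmark/data/*.csv | head -n 3
ls data/DiscoveryBench/*.csv | head -n 3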
Step 2: Prepare the parameter configuration
Here are the examples:
config.yaml
api_key: your_api_key # your API key for the model with API service. No need for open-source models.
data_root: /path/to/your/project/DataMind/eval/data # Root directory for data. (absolute path !!!)
run_eval.sh (the inline comments are explanatory; remove them before running)
python do_generate.py \
--model_name DataMind-Qwen2.5-7B \ # Model name to use.
--check_model gpt-4o-mini \ # Check model to use.
--output results \ # Output directory path.
--dataset_name QRData \ # Dataset name to use, chosen from QRData, DiscoveryBench.
--max_round 25 \ # Maximum number of steps.
--api_port 8000 \ # API port number, it is necessary if the local model is used.
--bidx 0 \ # Begin index (inclusive), `None` indicates that there is no restriction.
--eidx None \ # End index (exclusive), `None` indicates that there is no restriction.
--temperature 0.0 \ # Temperature for sampling.
--top_p 1 \ # Top p for sampling.
--add_random False # Whether to add random files.
(Optional) local_model.sh (the inline comments are explanatory; remove them before running)
CUDA_VISIBLE_DEVICES=$i python -m vllm.entrypoints.openai.api_server \
--model $MODEL_PATH \ # Local model path.
--served-model-name $MODEL_NAME \ # The model name specified by you.
--tensor-parallel-size $i \ # Set the size of tensor parallel processing.
--port $port # API port number, which is consistent with the `api_port` above.
Step 3: Run the shell script
(Optional) Deploy the local model if needed.
bash local_model.sh
Run the shell script to start the process.
bash run_eval.sh
We deeply appreciate the collaborative efforts of everyone involved, and we will continue to enhance and maintain this repository over the long term. If you encounter any issues, feel free to open one!
Citation

If you find our work helpful, please use the following citations.
@misc{qiao2025scalinggeneralistdataanalyticagents,
title={Scaling Generalist Data-Analytic Agents},
author={Shuofei Qiao and Yanqiu Zhao and Zhisong Qiu and Xiaobin Wang and Jintian Zhang and Zhao Bin and Ningyu Zhang and Yong Jiang and Pengjun Xie and Fei Huang and Huajun Chen},
year={2025},
eprint={2509.25084},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2509.25084},
}
@article{zhu2025open,
title={Why Do Open-Source LLMs Struggle with Data Analysis? A Systematic Empirical Study},
author={Zhu, Yuqi and Zhong, Yi and Zhang, Jintian and Zhang, Ziheng and Qiao, Shuofei and Luo, Yujie and Du, Lun and Zheng, Da and Chen, Huajun and Zhang, Ningyu},
journal={arXiv preprint arXiv:2506.19794},
year={2025}
}