Can't run directly #7

Description

@Xavier1994

I set VLLM_USE_V1=0 and ran the example:

python3 -m dripper.server --model_path /models/MinerU-HTML/ --state_machine v2 --port 7986
curl -X POST "http://localhost:7986/extract" -H "Content-Type: application/json" -d '{"html": "...", "url": "https://example.com"}'
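
(For context: VLLM_USE_V1 is read when vLLM creates its engine, so it only takes effect if it is already in the server process's environment at that point. A minimal sketch of forcing it from Python; this is illustrative only, not dripper's actual startup code, and exporting the variable in the shell before launching the server is the equivalent.)

import os

# The flag must be in the environment before vLLM initializes its engine;
# setting it afterwards has no effect.
os.environ["VLLM_USE_V1"] = "0"

from vllm import LLM  # import/engine creation happens after the flag is set

The server starts, but the /extract request then fails with: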

Traceback (most recent call last):
  File "/root/MinerU-HTML/dripper/server.py", line 105, in extract_main
    result_list = dripper.process(req.html)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/MinerU-HTML/dripper/api.py", line 404, in process
    generate_outputs = generate(
                       ^^^^^^^^^
  File "/root/MinerU-HTML/dripper/inference/inference.py", line 104, in generate
    res_list = llm.generate(prompt_list, sampling_params=sampling_params_arg)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/python311/lib/python3.11/site-packages/vllm/entrypoints/llm.py", line 440, in generate
    self._validate_and_add_requests(
  File "/usr/local/python311/lib/python3.11/site-packages/vllm/entrypoints/llm.py", line 1613, in _validate_and_add_requests
    raise e
  File "/usr/local/python311/lib/python3.11/site-packages/vllm/entrypoints/llm.py", line 1601, in _validate_and_add_requests
    request_id = self._add_request(
                 ^^^^^^^^^^^^^^^^^^
  File "/usr/local/python311/lib/python3.11/site-packages/vllm/entrypoints/llm.py", line 1700, in _add_request
    engine_request, tokenization_kwargs = self._process_inputs(
                                          ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/python311/lib/python3.11/site-packages/vllm/entrypoints/llm.py", line 1680, in _process_inputs
    engine_request = self.processor.process_inputs(
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/python311/lib/python3.11/site-packages/vllm/v1/engine/processor.py", line 383, in process_inputs
    self._validate_params(params)
  File "/usr/local/python311/lib/python3.11/site-packages/vllm/v1/engine/processor.py", line 185, in _validate_params
    self._validate_supported_sampling_params(params)
  File "/usr/local/python311/lib/python3.11/site-packages/vllm/v1/engine/processor.py", line 150, in _validate_supported_sampling_params
    raise ValueError(
ValueError: vLLM V1 does not support per request user provided logits processors.
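
Two observations, going only by the traceback: the failing frames live under vllm/v1/engine/, so despite VLLM_USE_V1=0 the request is still handled by the V1 engine (recent vLLM releases removed the V0 engine entirely, in which case the flag is ignored). And the error itself fires whenever a request carries a V0-style per-request logits processor. A minimal sketch, not dripper's actual inference code, that reproduces the same ValueError on a V1-only vLLM:

from vllm import LLM, SamplingParams

def passthrough(token_ids, logits):
    # Hypothetical no-op per-request logits processor (V0-style callable).
    return logits

llm = LLM(model="/models/MinerU-HTML/")
params = SamplingParams(logits_processors=[passthrough])

# On a V1-only vLLM build this raises:
#   ValueError: vLLM V1 does not support per request user provided logits processors.
llm.generate(["hello"], sampling_params=params)

If that is roughly what dripper/inference/inference.py does, the likely fixes are either pinning a vLLM version that still ships the V0 engine (so VLLM_USE_V1=0 actually works) or porting the processor to vLLM V1's engine-level logits-processor mechanism, which is configured once at LLM construction rather than per request.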
