
[Misc]: TTFT profiling with respect to prompt length #7635


Description

@luowenjie14

Anything you want to discuss about vllm.

I am profiling TTFT and TPOT on my machine. I could not explain the behavior of TTFT, so I opened this issue to seek advice.

The figure below shows TTFT with respect to prompt length on my machine. The test conditions are as follows (a representative server launch command is sketched after the list):

  • model: llama3-8B
  • GPU type: V100; the figure below shows the result for TP=2
  • dataset: ShareGPT
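
For reference, the TP=2 server launch was along the following lines (the exact model identifier/path is specific to my setup, so treat it as a placeholder):

    python -m vllm.entrypoints.openai.api_server \
        --model meta-llama/Meta-Llama-3-8B-Instruct \
        --tensor-parallel-size 2 \
        --port 8000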

Steps taken for TTFT and TPOT profiling:

  1. Start the OpenAI API-compatible server using: python -m vllm.entrypoints.openai.api_server --args
  2. Iteratively run benchmark_serving.py to get the TTFT and TPOT, sending only one request to the server each time to eliminate the effect of waiting time (see the sketch after this list).
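
As a concrete sketch of step 2, each measurement was a single-request run along these lines (flag names follow the benchmarks/benchmark_serving.py in the vLLM version I used and may differ in other versions; the dataset path is local to my machine):

    python benchmarks/benchmark_serving.py \
        --backend vllm \
        --model meta-llama/Meta-Llama-3-8B-Instruct \
        --dataset-name sharegpt \
        --dataset-path ShareGPT_V3_unfiltered_cleaned_split.json \
        --num-prompts 1

With a single prompt per run there is no queueing, so the reported TTFT does not include waiting time.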

The profiled TTFT is shown below.
Observation 1: when the prompt length is less than 400, TTFT stays at a roughly flat value of ~100 ms. This value is consistent across different TP settings (I tried TP=1, TP=2, and TP=4).
Observation 2: when the prompt length is greater than 400, TTFT grows linearly with prompt length. This result is in line with Figure 6b of this paper (https://arxiv.org/pdf/2405.06856).

I don't understand the result in observation 1. Can anyone provide some insight? What causes TTFT to be a horizontal line when the prompt length is less than 400?
[figure: TTFT vs. prompt length]
