
ai.api.xn--fiqs8s

Updated 12/7/2025
Performance Stats
  • Avg Speed: 152.81 t/s
  • Latency: 2.81s
  • Total Tests: 175
  • Models: 24

About ai.api.xn--fiqs8s

Provides unified API access to over 300 AI models from multiple providers, including OpenAI, Claude, and Gemini.

Featured models: Qwen2.5, Qwen2, DeepSeek-V3, GLM-4

China AI API is an aggregation platform offering a single interface to access a wide range of AI models. It supports over 300 models from providers such as OpenAI (GPT-4o), Claude, Gemini, MoonshotAI, Zhipu, and DeepSeek. Key features include API aggregation for unified calls, global deployment with multi-region nodes, HTTPS encryption with API key management, load balancing with fault tolerance, and automatic scaling for high concurrency. The platform reports an average response time under 100ms and 99.9% service availability. Use cases include AI testing, development, and integration across various applications. Pricing includes a free trial with a $0.2 credit, pay-as-you-go rates of ¥0.6/$1 per unit, and custom enterprise plans.
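
As a rough illustration of the unified-call idea described above, the sketch below assumes the platform exposes an OpenAI-compatible chat-completions endpoint; this page does not document the actual base URL or client requirements, so the URL, API key, and client library here are placeholders rather than provider specifics:

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-aggregator.invalid/v1",  # hypothetical base URL
    api_key="sk-...",                                       # hypothetical API key
)

# Models from different upstream providers, all called through one interface.
for model in ["gpt-4o", "claude-3-5-sonnet-20241022", "deepseek-v3", "glm-4-flash"]:
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Say hello in one sentence."}],
    )
    print(model, "->", reply.choices[0].message.content)
```

Because the interface is uniform, switching between upstream providers is just a change of the model string; per the platform's own description, routing, load balancing, and fault tolerance happen behind that single endpoint.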

Supported Models

Model | Speed | Latency | Tests
BLOOMZ-7B | 875.69 t/s | 2.46s | 5
gemini-1.5-flash-8b | 275.08 t/s | 1.28s | 10
gemini-1.5-flash-latest | 252.88 t/s | 1.26s | 10
gemini-1.5-flash-002 | 239.65 t/s | 1.62s | 5
gemini-2.0-flash-thinking-exp-01-21 | 230.82 t/s | 7.47s | 5
gemini-2.0-flash-lite-preview-02-05 | 188.38 t/s | 1.06s | 15
gemini-2.0-flash | 177.30 t/s | 0.98s | 20
gemini-1.5-flash | 158.26 t/s | 0.91s | 5
o3-mini | 148.38 t/s | 7.42s | 15
gpt-4o | 135.41 t/s | 1.43s | 5
gemini-2.0-flash-exp | 125.02 t/s | 1.40s | 5
qwen-72b | 102.22 t/s | 3.11s | 5
qwen2.5-72b-instruct | 92.64 t/s | 2.57s | 10
gpt-4o-mini | 78.22 t/s | 1.05s | 5
o3-mini-all | 74.59 t/s | 5.09s | 10
Phi-4 | 43.07 t/s | 1.32s | 10
claude-3-5-sonnet-20241022 | 39.36 t/s | 4.75s | 5
qwen2.5-7b-instruct | 34.15 t/s | 1.01s | 5
deepseek-reasoner | 29.86 t/s | 4.71s | 5
deepseek-v3 | 26.71 t/s | 3.41s | 5
deepseek-r1 | 25.85 t/s | 5.73s | 10
glm-4-flash | 25.16 t/s | 1.30s | 5

Recent Test Records

Time | Model | Speed | Latency
Apr 6, 11:36 AM | gemini-2.0-flash | 193.77 t/s | 0.86s
Apr 6, 11:36 AM | gemini-2.0-flash | 198.39 t/s | 0.92s
Apr 6, 11:35 AM | gemini-2.0-flash | 191.79 t/s | 1.06s
Feb 20, 08:16 AM | gemini-1.5-flash-latest | 174.31 t/s | 1.21s
Feb 20, 07:49 AM | gemini-1.5-flash-latest | 331.45 t/s | 1.32s
Feb 20, 07:45 AM | Phi-4 | 41.62 t/s | 0.90s
Feb 20, 07:44 AM | glm-4-flash | 25.16 t/s | 1.30s
Feb 14, 04:31 PM | qwen-72b | 102.22 t/s | 3.11s
Feb 14, 04:10 PM | gemini-2.0-flash-lite-preview-02-05 | 175.81 t/s | 0.87s
Feb 14, 04:08 PM | qwen2.5-72b-instruct | 96.05 t/s | 2.68s
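
The per-test speed (t/s) and latency figures in these records are the kind of numbers one can obtain by timing a streamed completion, though the page does not state the exact measurement method. Below is a minimal sketch of one way to measure time-to-first-token and token throughput against an OpenAI-compatible endpoint; the base URL, key, model choice, and the one-token-per-delta approximation are all assumptions:

```python
import time
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-aggregator.invalid/v1",  # hypothetical base URL
    api_key="sk-...",                                       # hypothetical API key
)

start = time.monotonic()
first_token_at = None
deltas = 0

stream = client.chat.completions.create(
    model="gemini-2.0-flash",  # any model from the table above
    messages=[{"role": "user", "content": "Write a short paragraph about rivers."}],
    stream=True,
)
for chunk in stream:
    if not chunk.choices:
        continue
    content = chunk.choices[0].delta.content
    if content:
        if first_token_at is None:
            first_token_at = time.monotonic()  # latency = time to first token
        deltas += 1                            # rough proxy: ~1 token per delta

end = time.monotonic()
latency = (first_token_at or end) - start
gen_time = max(end - (first_token_at or start), 1e-9)
print(f"latency: {latency:.2f}s")
print(f"speed:   {deltas / gen_time:.2f} t/s (delta-approximated)")
```

A real test harness would count tokens from the response's usage field or a tokenizer rather than approximating one token per streamed delta.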