29 changes: 26 additions & 3 deletions .speakeasy/gen.lock
@@ -1,7 +1,7 @@
lockVersion: 2.0.0
id: cfd52247-6a25-4c6d-bbce-fe6fce0cd69d
management:
-docChecksum: a5b3a567dd4de3ab77a9f0b23d4a9f10
+docChecksum: 7419f3b58a64f08efb375ead9e169446
docVersion: 1.0.0
speakeasyVersion: 1.666.0
generationVersion: 2.768.0
@@ -60,12 +60,18 @@ generatedFiles:
- docs/components/chaterrorerror.md
- docs/components/chatgenerationparams.md
- docs/components/chatgenerationparamsdatacollection.md
+- docs/components/chatgenerationparamsimageconfig.md
- docs/components/chatgenerationparamsmaxprice.md
+- docs/components/chatgenerationparamspluginautorouter.md
- docs/components/chatgenerationparamspluginfileparser.md
- docs/components/chatgenerationparamspluginmoderation.md
- docs/components/chatgenerationparamspluginresponsehealing.md
- docs/components/chatgenerationparamspluginunion.md
- docs/components/chatgenerationparamspluginweb.md
+- docs/components/chatgenerationparamspreferredmaxlatency.md
+- docs/components/chatgenerationparamspreferredmaxlatencyunion.md
+- docs/components/chatgenerationparamspreferredminthroughput.md
+- docs/components/chatgenerationparamspreferredminthroughputunion.md
- docs/components/chatgenerationparamsprovider.md
- docs/components/chatgenerationparamsresponseformatjsonobject.md
- docs/components/chatgenerationparamsresponseformatpython.md
@@ -126,6 +132,7 @@ generatedFiles:
- docs/components/filepath.md
- docs/components/filepathtype.md
- docs/components/forbiddenresponseerrordata.md
+- docs/components/idautorouter.md
- docs/components/idfileparser.md
- docs/components/idmoderation.md
- docs/components/idresponsehealing.md
@@ -141,6 +148,7 @@ generatedFiles:
- docs/components/message.md
- docs/components/messagecontent.md
- docs/components/messagedeveloper.md
+- docs/components/modality.md
- docs/components/model.md
- docs/components/modelarchitecture.md
- docs/components/modelarchitectureinstructtype.md
@@ -251,9 +259,11 @@ generatedFiles:
- docs/components/openresponsesreasoningtype.md
- docs/components/openresponsesrequest.md
- docs/components/openresponsesrequestignore.md
+- docs/components/openresponsesrequestimageconfig.md
- docs/components/openresponsesrequestmaxprice.md
- docs/components/openresponsesrequestonly.md
- docs/components/openresponsesrequestorder.md
+- docs/components/openresponsesrequestpluginautorouter.md
- docs/components/openresponsesrequestpluginfileparser.md
- docs/components/openresponsesrequestpluginmoderation.md
- docs/components/openresponsesrequestpluginresponsehealing.md
@@ -318,7 +328,12 @@ generatedFiles:
- docs/components/pdfengine.md
- docs/components/pdfparserengine.md
- docs/components/pdfparseroptions.md
+- docs/components/percentilelatencycutoffs.md
+- docs/components/percentilestats.md
+- docs/components/percentilethroughputcutoffs.md
- docs/components/perrequestlimits.md
+- docs/components/preferredmaxlatency.md
+- docs/components/preferredminthroughput.md
- docs/components/pricing.md
- docs/components/prompt.md
- docs/components/prompttokensdetails.md
@@ -386,6 +401,7 @@ generatedFiles:
- docs/components/responsesoutputitemfunctioncallstatusunion.md
- docs/components/responsesoutputitemfunctioncalltype.md
- docs/components/responsesoutputitemreasoning.md
+- docs/components/responsesoutputitemreasoningformat.md
- docs/components/responsesoutputitemreasoningstatuscompleted.md
- docs/components/responsesoutputitemreasoningstatusincomplete.md
- docs/components/responsesoutputitemreasoningstatusinprogress.md
@@ -399,6 +415,7 @@ generatedFiles:
- docs/components/responsesoutputmessagestatusinprogress.md
- docs/components/responsesoutputmessagestatusunion.md
- docs/components/responsesoutputmessagetype.md
+- docs/components/responsesoutputmodality.md
- docs/components/responsessearchcontextsize.md
- docs/components/responseswebsearchcalloutput.md
- docs/components/responseswebsearchcalloutputtype.md
@@ -684,7 +701,12 @@ generatedFiles:
- src/openrouter/components/paymentrequiredresponseerrordata.py
- src/openrouter/components/pdfparserengine.py
- src/openrouter/components/pdfparseroptions.py
+- src/openrouter/components/percentilelatencycutoffs.py
+- src/openrouter/components/percentilestats.py
+- src/openrouter/components/percentilethroughputcutoffs.py
- src/openrouter/components/perrequestlimits.py
+- src/openrouter/components/preferredmaxlatency.py
+- src/openrouter/components/preferredminthroughput.py
- src/openrouter/components/providername.py
- src/openrouter/components/provideroverloadedresponseerrordata.py
- src/openrouter/components/providerpreferences.py
@@ -716,6 +738,7 @@ generatedFiles:
- src/openrouter/components/responsesoutputitemfunctioncall.py
- src/openrouter/components/responsesoutputitemreasoning.py
- src/openrouter/components/responsesoutputmessage.py
+- src/openrouter/components/responsesoutputmodality.py
- src/openrouter/components/responsessearchcontextsize.py
- src/openrouter/components/responseswebsearchcalloutput.py
- src/openrouter/components/responseswebsearchuserlocation.py
@@ -982,7 +1005,7 @@ examples:
slug: "<value>"
responses:
"200":
-application/json: {"data": {"id": "openai/gpt-4", "name": "GPT-4", "created": 1692901234, "description": "GPT-4 is a large multimodal model that can solve difficult problems with greater accuracy.", "architecture": {"tokenizer": "GPT", "instruct_type": "chatml", "modality": "text->text", "input_modalities": ["text"], "output_modalities": ["text"]}, "endpoints": [{"name": "OpenAI: GPT-4", "model_name": "GPT-4", "context_length": 8192, "pricing": {"prompt": "0.00003", "completion": "0.00006"}, "provider_name": "OpenAI", "tag": "openai", "quantization": "fp16", "max_completion_tokens": 4096, "max_prompt_tokens": 8192, "supported_parameters": ["temperature", "top_p", "max_tokens", "frequency_penalty", "presence_penalty"], "uptime_last_30m": 99.5, "supports_implicit_caching": true}]}}
+application/json: {"data": {"id": "openai/gpt-4", "name": "GPT-4", "created": 1692901234, "description": "GPT-4 is a large multimodal model that can solve difficult problems with greater accuracy.", "architecture": {"tokenizer": "GPT", "instruct_type": "chatml", "modality": "text->text", "input_modalities": ["text"], "output_modalities": ["text"]}, "endpoints": [{"name": "OpenAI: GPT-4", "model_name": "GPT-4", "context_length": 8192, "pricing": {"prompt": "0.00003", "completion": "0.00006"}, "provider_name": "OpenAI", "tag": "openai", "quantization": "fp16", "max_completion_tokens": 4096, "max_prompt_tokens": 8192, "supported_parameters": ["temperature", "top_p", "max_tokens", "frequency_penalty", "presence_penalty"], "uptime_last_30m": 99.5, "supports_implicit_caching": true, "latency_last_30m": {"p50": 0.25, "p75": 0.35, "p90": 0.48, "p99": 0.85}, "throughput_last_30m": {"p50": 45.2, "p75": 38.5, "p90": 28.3, "p99": 15.1}}]}}
"404":
application/json: {"error": {"code": 404, "message": "Resource not found"}}
"500":
@@ -991,7 +1014,7 @@
speakeasy-default-list-endpoints-zdr:
responses:
"200":
-application/json: {"data": [{"name": "OpenAI: GPT-4", "model_name": "GPT-4", "context_length": 8192, "pricing": {"prompt": "0.00003", "completion": "0.00006"}, "provider_name": "OpenAI", "tag": "openai", "quantization": "fp16", "max_completion_tokens": 4096, "max_prompt_tokens": 8192, "supported_parameters": ["temperature", "top_p", "max_tokens"], "uptime_last_30m": 99.5, "supports_implicit_caching": true}]}
+application/json: {"data": [{"name": "OpenAI: GPT-4", "model_name": "GPT-4", "context_length": 8192, "pricing": {"prompt": "0.00003", "completion": "0.00006"}, "provider_name": "OpenAI", "tag": "openai", "quantization": "fp16", "max_completion_tokens": 4096, "max_prompt_tokens": 8192, "supported_parameters": ["temperature", "top_p", "max_tokens"], "uptime_last_30m": 99.5, "supports_implicit_caching": true, "latency_last_30m": {"p50": 25.5, "p75": 35.2, "p90": 48.7, "p99": 85.3}, "throughput_last_30m": {"p50": 25.5, "p75": 35.2, "p90": 48.7, "p99": 85.3}}]}
"500":
application/json: {"error": {"code": 500, "message": "Internal Server Error"}}
getParameters:
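
The updated "200" examples above attach `latency_last_30m` and `throughput_last_30m` percentile blocks (`p50`/`p75`/`p90`/`p99`) to each endpoint. Below is a minimal sketch of reading those fields from a response body of that shape, using the standard-library `json` module rather than the generated SDK models; the payload is trimmed from the example above, and treating the percentile blocks as optional is an assumption.

```python
import json

# Trimmed copy of the "200" endpoints example shown in the diff above.
raw = '''
{"data": [{"name": "OpenAI: GPT-4",
           "uptime_last_30m": 99.5,
           "latency_last_30m": {"p50": 0.25, "p75": 0.35, "p90": 0.48, "p99": 0.85},
           "throughput_last_30m": {"p50": 45.2, "p75": 38.5, "p90": 28.3, "p99": 15.1}}]}
'''

body = json.loads(raw)
for endpoint in body["data"]:
    # Assumption: the percentile blocks may be absent on older responses, so fall back to {}.
    latency = endpoint.get("latency_last_30m") or {}
    throughput = endpoint.get("throughput_last_30m") or {}
    print(endpoint["name"],
          "p50 latency:", latency.get("p50"),
          "p50 throughput:", throughput.get("p50"))
```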