openaiproxy/README.md
# OpenAI Proxy

This OpenAI API proxy injects Val Town's API keys. For usage documentation, check out https://www.val.town/v/std/openai
openaiproxy/main.tsx
```tsx
import { parseBearerString } from "https://esm.town/v/andreterron/parseBearerString";
import { API_URL } from "https://esm.town/v/std/API_URL?v=5";
import { OpenAIUsage } from "https://esm.town/v/std/OpenAIUsage";
import { RateLimit } from "npm:@rlimit/http";

// …

// Authenticate the caller against the Val Town API before proxying.
const authHeader = req.headers.get("Proxy-Authorization") || req.headers.get("Authorization");
const token = authHeader ? parseBearerString(authHeader) : undefined;
const meRes = await fetch(`${API_URL}/v1/me`, { headers: { Authorization: `Bearer ${token}` } });
if (!meRes.ok) {
  return new Response("Unauthorized", { status: 401 });
}

// …

// Proxy the request
const url = new URL("." + pathname, "https://api.openai.com");
url.search = search;

const headers = new Headers(req.headers);
headers.set("Host", url.hostname);
headers.set("Authorization", `Bearer ${Deno.env.get("OPENAI_API_KEY")}`);
// Guard the optional org header: Deno.env.get returns string | undefined,
// and Headers.set would otherwise send the literal string "undefined".
const org = Deno.env.get("OPENAI_API_ORG");
if (org) headers.set("OpenAI-Organization", org);

const modifiedBody = await limitFreeModel(req, user);

// …

/**
 * Indicates that the input or output port represents base structured
 * datatype containing multi-part content of a message, generated by an LLM.
 * See [Content](https://ai.google.dev/api/rest/v1beta/Content) for details
 * on the datatype.
 */
```
entireGoldLark/main.tsx
```tsx
// …

if (!inputs.$key || inputs.$key !== Deno.env.get("BB_SERVICE_KEY")) {
  return {
    $error: "Must provide an API key to access the service.",
  };
}

// …

const boardToEndpoint = (board: string) => {
  return board.replace(/\.json$/, ".api/run");
};
```
For now, this requires you to be running a board server. This harness actually acts as a proxy to the board server [run API endpoint](https://breadboard-ai.github.io/breadboard/docs/reference/board-run-api-endpoint/), and puts a nice (well, somewhat nice) frontend on top of it.

The script will look for `BB_LIVE_KEY` in your Val Town environment, which must contain your board server API key.

To use, create an HTTP val, then import the `proxy` function from this script and call it like this:
cooingTomatoSquirrel/main.tsx
```tsx
// …

) {
  return async (req: Request) => {
    const { api } = await import("https://esm.town/v/pomdtr/api");
    const { deleteCookie, getCookies, setCookie } = await import("jsr:@std/http/cookie");

// …

  "Access-Control-Allow-Methods": "GET,HEAD,PUT,PATCH,POST,DELETE",
  "Access-Control-Allow-Headers":
    "Content-Type, Access-Control-Allow-Headers, api-key",
  "Access-Control-Max-Age": "2592000", // 30 days
} as Record<string, string>;

// …

const boardToEndpoint = (board: string) => {
  return board.replace(/\.json$/, ".api/run");
};
```
smallweb_openapi_guide/main.tsx
```html
<ul>
  <li><strong>App</strong>: Represents an OpenAI-powered application with a name and URL.</li>
  <li><strong>Config</strong>: Defines configuration options for OpenAI API integration and application settings.</li>
  <li><strong>ConsoleLog</strong>: Captures console output from OpenAI model interactions and application processes.</li>
  <li><strong>CronLog</strong>: Logs scheduled tasks related to OpenAI operations, such as model fine-tuning or dataset updates.</li>
  <li><strong>HttpLog</strong>: Records HTTP requests made to and from the OpenAI API.</li>
</ul>
</div>

<!-- … -->

  Use Case: Manage multiple AI-powered applications or services.
  <br>
  Example: An app named "SentimentAnalyzer" with a URL pointing to its API endpoint.
</dd>

<dt>Config</dt>
<dd>
  Use Case: Store OpenAI API keys, model preferences, and application settings.
  <br>
  Example: Configure the GPT model to use, set token limits, and specify custom domains for AI services.

<!-- … -->

  Use Case: Debug AI model outputs and track application performance.
  <br>
  Example: Log completion tokens, response times, and any errors encountered during API calls.
</dd>

<!-- … -->

<dt>HttpLog</dt>
<dd>
  Use Case: Monitor and analyze API usage and performance.
  <br>
  Example: Track rate limits, response times, and payload sizes for OpenAI API calls.
</dd>
</dl>

<!-- … -->

<div class="collapsible-content">
  <ul>
    <li><strong>AI Service Management</strong>: Use the App and Config schemas to manage multiple AI services, each with its own settings and API keys.</li>
    <li><strong>Performance Monitoring</strong>: Utilize ConsoleLog and HttpLog to track the performance of AI models and API calls, helping optimize usage and costs.</li>
    <li><strong>Automated AI Workflows</strong>: Implement CronLog to manage and monitor automated tasks like periodic model retraining or batch processing of data through AI models.</li>
    <li><strong>Debugging and Troubleshooting</strong>: Leverage detailed logs from ConsoleLog and HttpLog to quickly identify and resolve issues in AI-powered applications.</li>
    <li><strong>Usage Analytics</strong>: Analyze HttpLog data to gain insights into API usage patterns, popular features, and potential areas for optimization or scaling.</li>
  </ul>
  <p>By implementing this schema, developers can create robust, scalable applications that effectively integrate and manage OpenAI's powerful AI capabilities while maintaining comprehensive logging and configuration control.</p>
```