Val Town Code Search

API Access

You can access search results via the JSON API by adding format=json to your query:

https://codesearch.val.run/?q=openai&page=129&format=json
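For example, a minimal fetch of one page of results (a sketch only: the search path and query parameters mirror the URL above, and the shape of the JSON body isn't documented on this page, so it is left untyped):

```ts
// Request one page of code-search results as JSON.
const res = await fetch("https://codesearch.val.run/?q=openai&page=1&format=json");
if (!res.ok) throw new Error(`Search request failed: ${res.status}`);
const results: unknown = await res.json(); // response shape isn't documented here
console.log(results);
```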

For typeahead suggestions, use the /typeahead endpoint:

https://codesearch.val.run/typeahead?q=openai

Returns an array of strings in the format "username" or "username/projectName".
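For example, a minimal sketch of fetching suggestions (the endpoint and string formats are as described above; that the body parses as a JSON array is an assumption):

```ts
// Request typeahead suggestions for a partial query.
const res = await fetch("https://codesearch.val.run/typeahead?q=openai");
const suggestions: string[] = await res.json();
// Each entry is either "username" or "username/projectName".
console.log(suggestions);
```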

Found 1603 results for "openai" (1198ms)

tanDinosaur/main.tsx (2 matches)

@stevekrouse · Updated 11 months ago
1  // let's ask openai's new gpt-4o model to tell us a joke
2
3  import { chat } from "https://esm.town/v/stevekrouse/openai";
4
5  const { content } = await chat("Tell me a joke", { max_tokens: 50, model: "gpt-4o" });

chat/README.md (4 matches)

@onixoni · Updated 11 months ago
1   # OpenAI ChatGPT helper function
2
3   This val uses your OpenAI token if you have one, and the @std/openai if not, so it provides limited OpenAI usage for free.
4
5   ```ts
6   import { chat } from "https://esm.town/v/stevekrouse/openai";
7
8   const { content } = await chat("Hello, GPT!");
11
12  ```ts
13  import { chat } from "https://esm.town/v/stevekrouse/openai";
14
15  const { content } = await chat(

chat/main.tsx (10 matches)

@onixoni · Updated 11 months ago
1   import type { ChatCompletion, ChatCompletionCreateParamsNonStreaming, Message } from "npm:@types/openai";
2
3   async function getOpenAI() {
4     // if you don't have a key, use our std library version
5     if (Deno.env.get("OPENAI_API_KEY") === undefined) {
6       const { OpenAI } = await import("https://esm.town/v/std/openai");
7       return new OpenAI();
8     } else {
9       const { OpenAI } = await import("npm:openai");
10      return new OpenAI();
11    }
12  }
13
14  /**
15   * Initiates a chat conversation with OpenAI's GPT model and retrieves the content of the first response.
16   * This function can handle both single string inputs and arrays of message objects.
17   * It supports various GPT models, allowing for flexibility in choosing the model based on the application's needs.
25   options?: Omit<ChatCompletionCreateParamsNonStreaming, "messages">,
26  ): Promise<ChatCompletion & { content: string }> {
27    const openai = await getOpenAI();
28    const messages = Array.isArray(input) ? input : [{ role: "user", content: input }];
29    const createParams: ChatCompletionCreateParamsNonStreaming = {
33      messages,
34    };
35    const completion = await openai.chat.completions.create(createParams);
36
37    return { ...completion, content: completion.choices[0].message.content };
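Taken together, the README and implementation excerpts pin down the helper's shape: chat() accepts either a prompt string or an array of message objects, plus an optional options object that is forwarded to chat.completions.create (everything except messages). A sketch of the message-array form, with illustrative values:

```ts
import { chat } from "https://esm.town/v/stevekrouse/openai";

// Pass a full message array instead of a single prompt string; the options
// object is forwarded to chat.completions.create (everything except `messages`).
const { content } = await chat(
  [
    { role: "system", content: "You are a concise assistant." },
    { role: "user", content: "Explain Val Town in one sentence." },
  ],
  { model: "gpt-4o", max_tokens: 60 },
);
console.log(content);
```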

gpt4/main.tsx (3 matches)

@stevekrouse · Updated 11 months ago
1  import { OpenAI } from "npm:openai";
2  const openai = new OpenAI();
3
4  export const gpt4 = async (content: string, max_tokens: number = 50) => {
5    let chatCompletion = await openai.chat.completions.create({
6      messages: [{
7        role: "user",
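The excerpt ends before the function returns, but the visible signature is enough for a hypothetical call. The import URL below follows Val Town's usual esm.town pattern and is an assumption, not something shown in this result:

```ts
// Import path assumed from Val Town's esm.town naming convention.
import { gpt4 } from "https://esm.town/v/stevekrouse/gpt4";

// First argument is the prompt; the second caps max_tokens (defaults to 50).
// The excerpt cuts off before the return statement, so the return shape isn't shown above.
const completion = await gpt4("Write a haiku about code search", 100);
```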

weatherGPT/main.tsx (3 matches)

@stevekrouse · Updated 11 months ago
1   import { email } from "https://esm.town/v/std/email?v=11";
2   import { OpenAI } from "npm:openai";
3
4   let location = "brooklyn ny";
8   ).then(r => r.json());
9
10  const openai = new OpenAI();
11  let chatCompletion = await openai.chat.completions.create({
12    messages: [{
13      role: "user",

calories/main.tsx (1 match)

@stevekrouse · Updated 11 months ago
2  import { fileToDataURL } from "https://esm.town/v/stevekrouse/fileToDataURL";
3  import { modifyImage } from "https://esm.town/v/stevekrouse/modifyImage";
4  import { chat } from "https://esm.town/v/stevekrouse/openai";
5  import { Hono } from "npm:hono@3";
6

limit_model_fork/main.tsx (6 matches)

@std · Updated 12 months ago
38
39    // Proxy the request
40    const url = new URL("." + pathname, "https://api.openai.com");
41    url.search = search;
42
43    const headers = new Headers(req.headers);
44    headers.set("Host", url.hostname);
45    headers.set("Authorization", `Bearer ${Deno.env.get("OPENAI_API_KEY")}`);
46    headers.set("OpenAI-Organization", Deno.env.get("OPENAI_API_ORG"));
47
48    const openAIRes = await fetch(url, {
49      method: req.method,
50      headers,
53    });
54
55    const res = new Response(openAIRes.body, openAIRes);
56
57    // Remove internal header
58    res.headers.delete("openai-organization");
59
60    return res;

limit_model_fork/README.md (2 matches)

@std · Updated 12 months ago
1  # OpenAI Proxy
2
3  This OpenAI API proxy injects Val Town's API keys. For usage documentation, check out https://www.val.town/v/std/openai

OpenAI/main.tsx (10 matches)

@pattysi · Updated 12 months ago
1   import { type ClientOptions, OpenAI as RawOpenAI } from "npm:openai";
2
3   /**
4    * API Client for interfacing with the OpenAI API. Uses Val Town credentials.
5    */
6   export class OpenAI {
7     private rawOpenAIClient: RawOpenAI;
8
9     /**
10     * API Client for interfacing with the OpenAI API. Uses Val Town credentials.
11     *
12     * @param {number} [opts.timeout=10 minutes] - The maximum amount of time (in milliseconds) the client will wait for a response before timing out.
19     */
20    constructor(options: Omit<ClientOptions, "baseURL" | "apiKey" | "organization"> = {}) {
21      this.rawOpenAIClient = new RawOpenAI({
22        ...options,
23        baseURL: "https://std-openaiproxy.web.val.run/v1",
24        apiKey: Deno.env.get("valtown"),
25        organization: null,
28
29    get chat() {
30      return this.rawOpenAIClient.chat;
31    }
32
33    readonly beta = {
34      get chat(): RawOpenAI["beta"]["chat"] {
35        return this.rawOpenAIClient.beta.chat;
36      },
37    };

OpenAI/README.md (12 matches)

@pattysi · Updated 12 months ago
1   # OpenAI - [Docs ↗](https://docs.val.town/std/openai)
2
3   Use OpenAI's chat completion API with [`std/openai`](https://www.val.town/v/std/openai). This integration enables access to OpenAI's language models without needing to acquire API keys.
4
5   Streaming is not yet supported. Upvote the [HTTP response streaming feature request](https://github.com/val-town/val-town-product/discussions/14) if you need it!
8
9   ```ts title="Example" val
10  import { OpenAI } from "https://esm.town/v/std/openai";
11
12  const openai = new OpenAI();
13
14  const completion = await openai.chat.completions.create({
15    messages: [
16      { role: "user", content: "Say hello in a creative way" },
25  ## Limits
26
27  While our wrapper simplifies the integration of OpenAI, there are a few limitations to keep in mind:
28
29  * **Usage Quota**: We limit each user to 10 requests per minute.
32  If these limits are too low, let us know! You can also get around the limitation by using your own keys:
33
34  1. Create your own API key on [OpenAI's website](https://platform.openai.com/api-keys)
35  2. Create an [environment variable](https://www.val.town/settings/environment-variables?adding=true) named `OPENAI_API_KEY`
36  3. Use the `OpenAI` client from `npm:openai`:
37
38  ```ts title="Example" val
39  import { OpenAI } from "npm:openai";
40
41  const openai = new OpenAI();
42  ```
43
44
45  [📝 Edit docs](https://github.com/val-town/val-town-docs/edit/main/src/content/docs/std/openai.mdx)

translateToEnglishWithOpenAI (1 file match)

@shlmt · Updated 1 day ago

testOpenAI (1 file match)

@stevekrouse · Updated 3 days ago
lost1991
import { OpenAI } from "https://esm.town/v/std/openai";

export default async function(req: Request): Promise<Response> {
  if (req.method === "OPTIONS") {
    return new Response(null, {
      headers: {
        "Access-Control-Allow-Origin": "*",