Val Town Code Search

API Access

You can access search results via the JSON API by adding format=json to your query:

https://codesearch.val.run/${url}?q=openai&page=89&format=json

For typeahead suggestions, use the /typeahead endpoint:

https://codesearch.val.run/typeahead?q=openai

Returns an array of strings in the format "username" or "username/projectName".
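For example, both endpoints can be queried with a plain fetch call. A minimal TypeScript sketch: it assumes the search endpoint is served from the site root (the URL template above leaves the path as a ${url} placeholder) and types the search response loosely, since its shape is not documented on this page.

```ts
// Minimal sketch: querying the code search JSON API and the typeahead endpoint.
const BASE = "https://codesearch.val.run";

async function searchCode(query: string, page = 1): Promise<unknown> {
  // Assumption: search is served from the root path.
  const res = await fetch(`${BASE}/?q=${encodeURIComponent(query)}&page=${page}&format=json`);
  if (!res.ok) throw new Error(`Search failed: ${res.status}`);
  return res.json(); // shape not documented here, so left as unknown
}

async function typeahead(query: string): Promise<string[]> {
  const res = await fetch(`${BASE}/typeahead?q=${encodeURIComponent(query)}`);
  if (!res.ok) throw new Error(`Typeahead failed: ${res.status}`);
  // Array of strings like "username" or "username/projectName"
  return res.json();
}

// Usage:
const results = await searchCode("openai", 89);
const suggestions = await typeahead("openai");
console.log(results, suggestions);
```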

Found 3231 results for "openai" (1828ms)

stevensDemo/.cursorrules • 4 matches

@TheyClonedMe•Updated 2 months ago
Note: When changing a SQLite table's schema, change the table's name (e.g., add _2 or _3) to create a fresh table.

### OpenAI
```ts
import { OpenAI } from "https://esm.town/v/std/openai";
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  messages: [
    { role: "user", content: "Say hello in a creative way" },
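The preview above cuts off inside the completions call. A minimal sketch of how this std/openai pattern is typically completed; the model name, token limit, and final logging line are assumptions, not part of the matched file:

```ts
import { OpenAI } from "https://esm.town/v/std/openai";

const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  messages: [
    { role: "user", content: "Say hello in a creative way" },
  ],
  model: "gpt-4o-mini", // assumed; the matched snippet ends before the model is shown
  max_tokens: 30,       // assumed
});
console.log(completion.choices[0].message.content);
```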

llm-tips/example.html • 1 match

@cricks_unmixed4u•Updated 2 months ago
<div class="flex justify-between items-center">
  <div>
    <h5 class="font-medium">OpenAI Whisper API</h5>
    <p class="text-sm text-gray-600">Direct audio transcription using Whisper</p>
  </div>

med/main.tsx • 5 matches

@svc•Updated 2 months ago
import { fetch } from "https://esm.town/v/std/fetch";
import { OpenAI } from "https://esm.town/v/std/openai";
import { z } from "npm:zod";

// ...

// --- CORE BACKEND LOGIC ---
const llm = async (sysPrompt, userPrompt, log, tid, model = "gpt-4o") => {
  log("DEBUG", "LLM", `Calling OpenAI for TID ${tid}`);
  try {
    const oa = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
    const completion = await oa.chat.completions.create({
      model,
      // ...
    });
    const content = completion.choices[0]?.message?.content;
    if (!content) throw new Error("OpenAI returned no content.");
    return JSON.parse(content);
  } catch (err) {
    log("ERROR", "LLM", `OpenAI API call failed for TID ${tid}`, { error: err.message });
    throw new Error(`AI model error: ${err.message}`);
  }
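A hypothetical call site for the llm() helper shown above: because the helper JSON.parses the model's reply, the system prompt has to ask for strict JSON. The prompts, the urgency field, and the stub logger below are illustrative assumptions, not taken from the matched file:

```ts
// Illustrative only: llm() refers to the helper in the snippet above.
const log = (level: string, tag: string, msg: string, extra?: unknown) =>
  console.log(`[${level}] [${tag}] ${msg}`, extra ?? "");
const tid = crypto.randomUUID(); // per-request trace id (assumed convention)

const triage = await llm(
  'You are a triage assistant. Reply with strict JSON: {"urgency": "low" | "medium" | "high"}.',
  "Patient reports a mild headache lasting two days.",
  log,
  tid,
);
console.log(triage.urgency); // parsed from the model's JSON reply
```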

llm-tips/example.html • 1 match

@cricks_unmixed4u•Updated 2 months ago
<div class="flex justify-between items-center">
  <div>
    <h5 class="font-medium">OpenAI Whisper API</h5>
    <p class="text-sm text-gray-600">Direct audio transcription using Whisper</p>
  </div>

kuadratic/knowledge.md • 8 matches

@cricks_unmixed4u•Updated 2 months ago
```

### OpenAI

Do not use the val.town std library; import from https://esm.town/v/cricks_unmixed4u/openai-client/main.tsx

A TypeScript interface for interacting with OpenAI's chat models, with optional global rate limiting; it uses Val Town's SQLite for persistent rate-limit tracking.

Key Components

- Message Type: Defines the structure for chat messages (role and content).
- ChatOpenAI(model: string): Factory function returning an object with an invoke(messages) method. This method sends an array of messages to the specified OpenAI chat model and returns the assistant's response.
- GlobalRateLimitedChatOpenAI(model: string, requestsPerSecond: number): Decorator for ChatOpenAI that enforces a global rate limit (requests per second) using a persistent SQLite table.
- GlobalRateLimiter: Class that implements the rate-limiting logic. It checks the number of requests in the current time window and throws an error if the limit is exceeded. It uses a table (global_rate_limit_1) in Val Town's SQLite.
- ensureGlobalRateLimitTableExists: Ensures the rate-limit tracking table exists in the database at startup.

Usage

- Use ChatOpenAI(model) for direct, unlimited access to OpenAI chat completions.
- Use GlobalRateLimitedChatOpenAI(model, requestsPerSecond) to enforce a global rate limit on chat completions, suitable for shared or public-facing endpoints.

Val Town/Platform Notes

- Uses Val Town's standard SQLite API for persistent storage.
- Designed for server-side use (no browser-specific code).
- No secrets are hardcoded; OpenAI API keys are managed by the OpenAI SDK/environment.

### Email

kuadratic/openai-client.mdc • 7 matches

@cricks_unmixed4u•Updated 2 months ago
---
description: You can use openai-client when integrating vals with an LLM
globs:
alwaysApply: false
---
A TypeScript interface for interacting with OpenAI's chat models, with optional global rate limiting; it uses Val Town's SQLite for persistent rate-limit tracking.

Key Components

- Message Type: Defines the structure for chat messages (role and content).
- ChatOpenAI(model: string): Factory function returning an object with an invoke(messages) method. This method sends an array of messages to the specified OpenAI chat model and returns the assistant's response.
- GlobalRateLimitedChatOpenAI(model: string, requestsPerSecond: number): Decorator for ChatOpenAI that enforces a global rate limit (requests per second) using a persistent SQLite table.
- GlobalRateLimiter: Class that implements the rate-limiting logic. It checks the number of requests in the current time window and throws an error if the limit is exceeded. It uses a table (global_rate_limit_1) in Val Town's SQLite.
- ensureGlobalRateLimitTableExists: Ensures the rate-limit tracking table exists in the database at startup.

Usage

- Use ChatOpenAI(model) for direct, unlimited access to OpenAI chat completions (see the sketch below).
- Use GlobalRateLimitedChatOpenAI(model, requestsPerSecond) to enforce a global rate limit on chat completions, suitable for shared or public-facing endpoints.

Val Town/Platform Notes

- Uses Val Town's standard SQLite API for persistent storage.
- Designed for server-side use (no browser-specific code).
- No secrets are hardcoded; OpenAI API keys are managed by the OpenAI SDK/environment.
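Based only on the description above, a minimal usage sketch. The export names, the invoke() signature, and the Message role values are taken from that description and assumed to match the client's actual source:

```ts
// Sketch under the assumptions stated above; not verified against the client's source.
import {
  ChatOpenAI,
  GlobalRateLimitedChatOpenAI,
} from "https://esm.town/v/cricks_unmixed4u/openai-client/main.tsx";

type Message = { role: "system" | "user" | "assistant"; content: string };

const messages: Message[] = [
  { role: "system", content: "You are a terse assistant." },
  { role: "user", content: "Say hello in a creative way" },
];

// Direct, unlimited access to chat completions.
const chat = ChatOpenAI("gpt-4o-mini");
console.log(await chat.invoke(messages));

// Same call, globally capped at 1 request/second via the SQLite-backed limiter;
// per the description, an error is thrown when the window's budget is exhausted.
const limited = GlobalRateLimitedChatOpenAI("gpt-4o-mini", 1);
console.log(await limited.invoke(messages));
```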

kuadratic/.cursorrules • 4 matches

@cricks_unmixed4u•Updated 2 months ago
Note: When changing a SQLite table's schema, change the table's name (e.g., add _2 or _3) to create a fresh table.

### OpenAI

```ts
import { OpenAI } from "https://esm.town/v/std/openai";
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  messages: [
    { role: "user", content: "Say hello in a creative way" },

send-transcripts/main.tsx • 15 matches

@sunnyatlightswitch•Updated 2 months ago
import { createClient } from "https://esm.sh/@supabase/supabase-js@2.39.3";
import { Hono } from "https://esm.sh/hono@3.11.7";
import { OpenAI } from "https://esm.sh/openai@4.28.0";
import { Resend } from "https://esm.sh/resend@3.2.0";
import { email } from "https://esm.town/v/std/email";
// ...
const supabase = createClient(SUPABASE_URL, SUPABASE_SERVICE_KEY);

// OpenAI configuration
const OPENAI_API_KEY = Deno.env.get("OPENAI_API_KEY")
  || "sk-proj-[REDACTED]";

const openai = new OpenAI({
  apiKey: OPENAI_API_KEY,
});

// ...
}

// Function to summarize transcript using OpenAI
async function summarizeTranscript(text: string) {
  try {
    const completion = await openai.chat.completions.create({
      model: "gpt-4o-mini",
      messages: [
        // ...
    const summary = completion.choices[0]?.message?.content;
    if (!summary) {
      throw new Error("No summary generated by OpenAI");
    }

    console.log("Transcript summarized by OpenAI");
    console.log("OpenAI completion ID:", completion.id);

    return {
      // ...
    };
  } catch (error) {
    console.error("Failed to summarize transcript with OpenAI:", error);
    throw error;
  }
// ...
  }
}
async function saveFinalReport(summary: string, email: string, openaiThreadId: string) {
  try {
    const { data, error } = await supabase
      // ...
        body: summary,
        email: email,
        openai_thread_id: openaiThreadId,
      },
    ])
// ...
  }

  // Summarize transcript with OpenAI and save to final_reports
  try {
    console.log("Starting OpenAI summarization...");
    const summaryResult = await summarizeTranscript(body.text);

untitled-7369/main.tsx • 1 match

@handshake5424•Updated 2 months ago
const html = await fetchText(
  "https://en.wikipedia.org/wiki/OpenAI",
);
const $ = load(html);

habitual/main.tsx • 5 matches

@legal•Updated 2 months ago
import { OpenAI } from "https://esm.town/v/std/openai";

// --- TYPE DEFINITIONS ---
// ...
  try {
    if (req.method === "POST") {
      const openai = new OpenAI();
      const body = await req.json();

      switch (action) {
        case "suggestHabit": {
          const completion = await openai.chat.completions.create({
            model: "gpt-4o",
            messages: [
              // ...
        }
        case "suggestHabitSet": {
          const completion = await openai.chat.completions.create({
            model: "gpt-4o",
            messages: [
              // ...
        }
        case "suggestIcons": {
          const completion = await openai.chat.completions.create({
            model: "gpt-4o",
            messages: [

openai-usage • 1 file match

@nbbaier•Updated 1 day ago

hello-realtime • 5 file matches

@jubertioai•Updated 4 days ago
Sample app for the OpenAI Realtime API
reconsumeralization
import { OpenAI } from "https://esm.town/v/std/openai";
import { sqlite } from "https://esm.town/v/stevekrouse/sqlite";
/**
 * Practical Implementation of Collective Content Intelligence
 * Bridging advanced AI with collaborative content creation
 */
exp
kwhinnery_openai