Val Town Code Search

API Access

You can access search results via the JSON API by adding `format=json` to your query:

https://codesearch.val.run/$1?q=openai&page=50&format=json

For typeahead suggestions, use the /typeahead endpoint:

https://codesearch.val.run/typeahead?q=openai

Returns an array of strings in the format "username" or "username/projectName".
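A minimal TypeScript sketch of consuming the typeahead endpoint (the URL and response shape follow the description above; the grouping helper and its names are illustrative):

```ts
// Split typeahead suggestions into bare usernames and "username/projectName" entries.
type Suggestions = { users: string[]; projects: string[] };

function groupSuggestions(results: string[]): Suggestions {
  const users: string[] = [];
  const projects: string[] = [];
  for (const r of results) {
    // Project suggestions contain a slash; bare usernames do not.
    (r.includes("/") ? projects : users).push(r);
  }
  return { users, projects };
}

// Fetch suggestions from the typeahead endpoint documented above.
async function typeahead(query: string): Promise<Suggestions> {
  const res = await fetch(`https://codesearch.val.run/typeahead?q=${encodeURIComponent(query)}`);
  if (!res.ok) throw new Error(`typeahead failed: ${res.status}`);
  return groupSuggestions(await res.json());
}
```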

Found 2159 results for "openai" (2030ms)

PRChecker2 / index.tsx (1 match)

@tagawa • Updated 3 weeks ago

    // Call AI service with your private API key
    // Replace with your actual AI service URL
    const aiResponse = await fetch("https://api.openai.com/v1/chat/completions", {
      method: "POST",
      headers: {

untitled-2444 / README.md (10 matches)

@all • Updated 3 weeks ago

# Script Improver Pro

A Val Town application that processes large scripts through OpenAI's GPT-4 model to make them clearer, more concise, and better written.

## Features
…
- Combines processed outputs seamlessly
- Simple, responsive UI with token counting and progress tracking
- **Direct OpenAI API Connection** - Use your own API key for direct processing
- **Debug Console** - View API requests, responses, and token usage
- **Script Type Detection** - Automatically identifies screenplay, technical, marketing, academic, or creative content
…
1. The user pastes their script into the text area and provides optional instructions
2. The application splits the text into chunks of approximately 3330 tokens each
3. Each chunk is processed sequentially through OpenAI's GPT-4 model
4. The processed chunks are combined, handling overlaps to avoid duplication
5. The improved script is displayed to the user
…
- `/index.ts` - Main HTTP endpoint and route handler
- `/backend/processor.ts` - Text processing logic and OpenAI integration
- `/backend/openaiProxy.ts` - Server-side proxy for OpenAI API calls
- `/backend/scriptTypeDetector.ts` - Automatic script type detection
- `/shared/tokenizer.ts` - Advanced token counting and text chunking
- `/shared/OpenAIConnector.ts` - Direct OpenAI API connection handling
- `/frontend/index.html` - Main HTML template
- `/frontend/index.ts` - Frontend JavaScript logic
…
- **Backend**: Hono.js for HTTP routing
- **Frontend**: Vanilla TypeScript with Tailwind CSS
- **AI**: OpenAI GPT-4 for text processing
- **Styling**: Tailwind CSS for responsive design
- **Syntax Highlighting**: highlight.js for code highlighting
…
2. Select script type or use auto-detection
3. Choose an instruction template or write custom instructions
4. (Optional) Set your OpenAI API key for direct processing
5. Click "Improve Script" to process
6. View, compare, and download the improved script

### Direct API Connection
You can use your own OpenAI API key for direct processing, bypassing the server proxy. This can be useful for:
- Processing very large scripts
- Using custom model parameters
…
## Limitations

- Token counting is approximate and may not exactly match OpenAI's tokenizer
- Very large scripts may take longer to process
- The quality of improvements depends on the clarity of instructions and the quality of the input script
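The overlap-aware chunking this README describes can be sketched roughly as follows. This is a simplified illustration that assumes one token per whitespace-delimited word (the app uses its own tokenizer); the chunk size and overlap constants mirror the values mentioned in this project:

```ts
// Rough sketch of overlap-aware chunking: consecutive chunks share `overlap`
// words so that no sentence is cut off without context.
const CHUNK_TOKENS = 3330;
const OVERLAP_TOKENS = 250;

function splitIntoChunks(text: string, size = CHUNK_TOKENS, overlap = OVERLAP_TOKENS): string[] {
  const words = text.split(/\s+/).filter(Boolean);
  const chunks: string[] = [];
  const step = size - overlap; // advance by size minus overlap each iteration
  for (let start = 0; start < words.length; start += step) {
    chunks.push(words.slice(start, start + size).join(" "));
    if (start + size >= words.length) break; // last chunk reached the end
  }
  return chunks;
}
```

When the chunks are recombined, the shared overlap region is what lets the combiner detect and drop duplicated text at the seams.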

untitled-2444 / processor.ts (10 matches)

@all • Updated 3 weeks ago

    import { OpenAI } from "https://esm.town/v/std/openai";
    import { Tokenizer } from "../shared/tokenizer.ts";
    import { blob } from "https://esm.town/v/std/blob";

    // Use the standard OpenAI library for server-side processing
    const openai = new OpenAI();
    const tokenizer = new Tokenizer();
    …
    const OVERLAP_TOKENS = 250;

    // OpenAI model configuration
    const DEFAULT_MODEL = "gpt-4o";
    const DEFAULT_TEMPERATURE = 0.7;

    /**
     * Process large text by splitting into chunks and sending to OpenAI
     */
    export async function processLargeText(
    …
      }

      // Process with OpenAI
      const processedChunk = await processChunkWithOpenAI(
        chunk,
        contextualInstructions,
    …
    /**
     * Process a single chunk with OpenAI
     */
    async function processChunkWithOpenAI(
      chunk: string,
      instructions: string,
    …
      for (let attempt = 0; attempt < 3; attempt++) {
        try {
          const completion = await openai.chat.completions.create({
            model: config.model,
            messages: [{ role: "user", content: prompt }],
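The three-attempt loop in this excerpt is a common retry pattern for transient API errors such as rate limits. A generic sketch with exponential backoff (the helper name and delay values are illustrative, not from this project):

```ts
// Run `fn` up to `attempts` times, doubling the delay between failures.
// Rethrows the last error if every attempt fails.
async function withRetry<T>(fn: () => Promise<T>, attempts = 3, baseDelayMs = 500): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < attempts - 1) {
        // Exponential backoff: 500ms, 1000ms, 2000ms, ...
        await new Promise(resolve => setTimeout(resolve, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError;
}
```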

untitled-2444 / index.ts (4 matches)

@all • Updated 3 weeks ago

    });

    // API endpoint for OpenAI proxy
    app.post("/api/openai/chat", async c => {
      try {
        const body = await c.req.json();

        const { proxyChatCompletion, logApiUsage } = await import("./backend/openaiProxy.ts");
        const result = await proxyChatCompletion(body);
    …
        return c.json(result);
      } catch (error) {
        console.error("OpenAI proxy error:", error);
        return c.json({
          error: error instanceof Error ? error.message : "Unknown error occurred"

untitled-2444 / openaiProxy.ts (7 matches)

@all • Updated 3 weeks ago

    /**
     * OpenAI API Proxy
     *
     * Proxies requests to OpenAI API to avoid exposing API keys to the client
     */
    import { OpenAI } from "https://esm.town/v/std/openai";

    const openai = new OpenAI();

    /**
     * Proxy a chat completion request to OpenAI
     */
    export async function proxyChatCompletion(params: any): Promise<any> {
    …
      // Create completion
      const completion = await openai.chat.completions.create({
        model,
        messages: params.messages,
    …
      return completion;
    } catch (error) {
      console.error("OpenAI API error:", error);
      throw error;
    }

untitled-2444 / DebugConsole.ts (6 matches)

@all • Updated 3 weeks ago

     * A collapsible terminal-like console for debugging
     */
    import { ApiEvents } from "../../shared/OpenAIConnector.ts";

    export class DebugConsole {
    …
       */
      private setupEventListeners(): void {
        window.addEventListener(`openai:${ApiEvents.REQUEST_STARTED}`, (e: any) => {
          const detail = e.detail;
          this.log('request', `Request started: ${detail.model}`, {
    …
        window.addEventListener(`openai:${ApiEvents.REQUEST_COMPLETED}`, (e: any) => {
          const detail = e.detail;
          this.log('success', `Request completed: ${detail.requestId}`, {
    …
        window.addEventListener(`openai:${ApiEvents.REQUEST_ERROR}`, (e: any) => {
          const detail = e.detail;
          this.log('error', `Error: ${detail.error}`, {
    …
        window.addEventListener(`openai:${ApiEvents.TOKEN_USAGE}`, (e: any) => {
          const detail = e.detail;
          this.log('info', `Token usage: ${detail.totalTokens} total (${detail.promptTokens} prompt, ${detail.completionTokens} completion)`, {
    …
        window.addEventListener(`openai:${ApiEvents.LOG}`, (e: any) => {
          const detail = e.detail;
          this.log('log', detail.message);

untitled-2444 / OpenAIConnector.ts (8 matches)

@all • Updated 3 weeks ago

    /**
     * OpenAI API Connector
     *
     * Handles direct connections to the OpenAI API
     */
    …
    };

    export class OpenAIConnector {
      private apiKey: string | null = null;
      private baseUrl = 'https://api.openai.com/v1';
      private useServerProxy: boolean = true;
    …
       */
      private async createCompletionViaProxy(params: any): Promise<any> {
        const response = await fetch('/api/openai/chat', {
          method: 'POST',
          headers: {
    …
        if (!response.ok) {
          const errorData = await response.json().catch(() => null);
          throw new Error(errorData?.error?.message || `OpenAI API error: ${response.status}`);
        }
    …
        if (typeof window !== 'undefined') {
          this.dispatchEvent = (eventName, data) => {
            const event = new CustomEvent(`openai:${eventName}`, { detail: data });
            window.dispatchEvent(event);
          };
    …
          // Fallback for non-browser environments
          this.dispatchEvent = (eventName, data) => {
            console.log(`[OpenAI Event] ${eventName}:`, data);
          };
        }
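The connector's proxy-versus-direct split reduces to a small routing decision; a sketch under the excerpt's assumptions (the proxy URL and base URL follow the excerpt, the helper itself is illustrative):

```ts
// Decide where a chat request goes: through the server proxy (the API key
// never reaches the client) or directly to the OpenAI API with a user key.
function resolveEndpoint(
  useServerProxy: boolean,
  apiKey: string | null,
): { url: string; headers: Record<string, string> } {
  if (useServerProxy || !apiKey) {
    // Server proxy: key stays server-side; fall back here if no key is set.
    return { url: "/api/openai/chat", headers: { "Content-Type": "application/json" } };
  }
  // Direct call: attach the user's own key as a bearer token.
  return {
    url: "https://api.openai.com/v1/chat/completions",
    headers: { "Content-Type": "application/json", "Authorization": `Bearer ${apiKey}` },
  };
}
```

Routing through the proxy by default is the safer choice: the direct path exposes the key to the page, so it only makes sense when the user explicitly supplies their own.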

Assistant / index.ts (5 matches)

@charmaine • Updated 3 weeks ago

    import { cors } from "https://esm.sh/hono@3.11.12/middleware";
    import { readFile, serveFile } from "https://esm.town/v/std/utils@85-main/index.ts";
    import { OpenAI } from "https://esm.town/v/std/openai";
    import {
      getAuthUrl,
    …
    app.get("/shared/*", c => serveFile(c.req.path, import.meta.url));

    // Initialize OpenAI client
    const openai = new OpenAI();

    // Helper function to get session from cookies
    …
      try {
        // Use OpenAI to parse the natural language command
        const completion = await openai.chat.completions.create({
          model: "gpt-4o-mini",
          messages: [

untitled-6906 / Main.tsx (7 matches)

@Get • Updated 3 weeks ago

    import { OpenAI } from "https://esm.town/v/std/openai";

    // Initialize OpenAI client (moved to server scope)
    const openai = new OpenAI();

    // This function will be the main entry point for the Val
    …
      if (typeof userCommand === "string") {
        try {
          const aiResponse = await getOpenAIResponse(userCommand);
          return new Response(generateHtml(aiResponse), { headers: { "Content-Type": "text/html" } });
        } catch (error) {
    …
    }

    async function getOpenAIResponse(command: string): Promise<string> {
      try {
        const completion = await openai.chat.completions.create({
          messages: [
            { role: "user", content: command },
    …
        return completion.choices[0].message.content || "The air shifts, but nothing changes.";
      } catch (error) {
        console.error("Error calling OpenAI:", error);
        return "The mists swirl strangely. (AI connection error.)";
      }

stevensDemo / .cursorrules (4 matches)

@Fewl • Updated 3 weeks ago

Note: When changing a SQLite table's schema, change the table's name (e.g., add _2 or _3) to create a fresh table.

### OpenAI
```ts
import { OpenAI } from "https://esm.town/v/std/openai";
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  messages: [
    { role: "user", content: "Say hello in a creative way" },
```

openai-client (1 file match)

@cricks_unmixed4u • Updated 3 days ago

openai_enrichment (6 file matches)

@stevekrouse • Updated 5 days ago
kwhinnery_openai

reconsumeralization

    import { OpenAI } from "https://esm.town/v/std/openai";
    import { sqlite } from "https://esm.town/v/stevekrouse/sqlite";

    /**
     * Practical Implementation of Collective Content Intelligence
     * Bridging advanced AI with collaborative content creation
     */
    exp