Val Town Code Search
Return to Val Town

API Access

You can access search results via the JSON API by adding format=json to your query:

https://codesearch.val.run/...?q=openai&page=64&format=json
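
For example, a page of results can be fetched with a plain fetch call. This is a minimal sketch in TypeScript: the path before the query string is elided above, so the site root is assumed here, and the response fields are not documented on this page, so the payload is only logged.

// Fetch one page of search results as JSON and log the parsed payload.
// The root path and the response shape are assumptions; adjust to the URL shown in your browser.
const searchRes = await fetch("https://codesearch.val.run/?q=openai&page=1&format=json");
if (!searchRes.ok) throw new Error(`Search request failed: ${searchRes.status}`);
const searchJson = await searchRes.json();
console.log(searchJson);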

For typeahead suggestions, use the /typeahead endpoint:

https://codesearch.val.run/typeahead?q=openai

Returns an array of strings in the format "username" or "username/projectName".
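
A minimal sketch of consuming the typeahead endpoint from TypeScript; the only assumption beyond the description above is that the array arrives as JSON.

// Fetch typeahead suggestions and print them.
// Each entry is either "username" or "username/projectName", as documented above.
const typeaheadRes = await fetch("https://codesearch.val.run/typeahead?q=openai");
if (!typeaheadRes.ok) throw new Error(`Typeahead request failed: ${typeaheadRes.status}`);
const suggestions: string[] = await typeaheadRes.json();
suggestions.forEach(s => console.log(s));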

Found 2303 results for "openai" (3776ms)

untitled-2444/index.ts • 10 matches

@all • Updated 1 month ago
6 import { SyntaxHighlighter } from "./components/SyntaxHighlighter.ts";
7 import { DebugConsole } from "./components/DebugConsole.ts";
8 import { OpenAIConnector } from "../shared/OpenAIConnector.ts";
9 import { ThemeManager } from "./components/ThemeManager.ts";
10 import { ConfettiManager } from "./components/ConfettiManager.ts";
18 const syntaxHighlighter = new SyntaxHighlighter();
19 const debugConsole = new DebugConsole();
20 const openAIConnector = new OpenAIConnector();
21 const themeManager = new ThemeManager();
22 const confettiManager = new ConfettiManager();
27
28 // Set up all event handlers
29 setupFormHandling(tokenizer, scriptEditor, syntaxHighlighter, openAIConnector, confettiManager, textFormatter);
30 setupTokenCounter(tokenizer);
31 setupTemplateSelector(templateManager);
32 setupAdvancedOptions(openAIConnector, debugConsole);
33 setupResultActions(scriptEditor, textFormatter);
34 setupHistoryModal(historyManager, scriptEditor);
51 scriptEditor: ScriptEditor,
52 syntaxHighlighter: SyntaxHighlighter,
53 openAIConnector: OpenAIConnector,
54 confettiManager: ConfettiManager,
55 textFormatter: TextFormatter
144 const apiKeyInput = document.getElementById("apiKey") as HTMLInputElement;
145 if (apiKeyInput && apiKeyInput.value && localStorage.getItem("useDirectApi") === "true") {
146 // Process directly with OpenAI API
147 const prompt = createPromptForScriptType(
148 text,
153 );
154
155 const response = await openAIConnector.createChatCompletion({
156 model,
157 messages: [{ role: "user", content: prompt }],
314
315 // Set up advanced options
316 function setupAdvancedOptions(openAIConnector: OpenAIConnector, debugConsole: DebugConsole) {
317 const advancedOptionsBtn = document.getElementById("advancedOptionsBtn") as HTMLButtonElement;
318 const advancedOptions = document.getElementById("advancedOptions") as HTMLDivElement;
350
351 if (!apiKey.startsWith("sk-")) {
352 alert("Invalid API key format. OpenAI API keys start with 'sk-'");
353 return;
354 }
356 try {
357 // Set the API key in the connector
358 openAIConnector.setApiKey(apiKey);
359
360 // Store the preference (but not the key itself)

untitled-2444/index.html • 2 matches

@all • Updated 1 month ago
318 <div class="md:col-span-3">
319 <div class="flex items-center justify-between">
320 <label for="apiKey" class="block text-sm font-medium text-gray-700 dark:text-gray-300">OpenAI API Key (Optional)</label>
321 <span class="text-xs text-gray-500 dark:text-gray-400">Direct API connection</span>
322 </div>
649
650 <footer class="mt-8 text-center text-sm text-gray-500 dark:text-gray-400">
651 <p>Powered by OpenAI GPT-4 • <a href="#" id="viewSourceLink" target="_top" class="text-indigo-600 dark:text-indigo-400 hover:underline">View Source</a></p>
652 </footer>
653 </div>

PRChecker2/index.tsx • 1 match

@tagawa • Updated 1 month ago
16 // Call AI service with your private API key
17 // Replace with your actual AI service URL
18 const aiResponse = await fetch("https://api.openai.com/v1/chat/completions", {
19 method: "POST",
20 headers: {

untitled-2444/README.md • 10 matches

@all • Updated 1 month ago
1 # Script Improver Pro
2
3 A Val Town application that processes large scripts through OpenAI's GPT-4 model to make them clearer, more concise, and better written.
4
5 ## Features
10 - Combines processed outputs seamlessly
11 - Simple, responsive UI with token counting and progress tracking
12 - **Direct OpenAI API Connection** - Use your own API key for direct processing
13 - **Debug Console** - View API requests, responses, and token usage
14 - **Script Type Detection** - Automatically identifies screenplay, technical, marketing, academic, or creative content
21 1. The user pastes their script into the text area and provides optional instructions
22 2. The application splits the text into chunks of approximately 3330 tokens each
23 3. Each chunk is processed sequentially through OpenAI's GPT-4 model
24 4. The processed chunks are combined, handling overlaps to avoid duplication
25 5. The improved script is displayed to the user
28
29 - `/index.ts` - Main HTTP endpoint and route handler
30 - `/backend/processor.ts` - Text processing logic and OpenAI integration
31 - `/backend/openaiProxy.ts` - Server-side proxy for OpenAI API calls
32 - `/backend/scriptTypeDetector.ts` - Automatic script type detection
33 - `/shared/tokenizer.ts` - Advanced token counting and text chunking
34 - `/shared/OpenAIConnector.ts` - Direct OpenAI API connection handling
35 - `/frontend/index.html` - Main HTML template
36 - `/frontend/index.ts` - Frontend JavaScript logic
42 - **Backend**: Hono.js for HTTP routing
43 - **Frontend**: Vanilla TypeScript with Tailwind CSS
44 - **AI**: OpenAI GPT-4 for text processing
45 - **Styling**: Tailwind CSS for responsive design
46 - **Syntax Highlighting**: highlight.js for code highlighting
51 2. Select script type or use auto-detection
52 3. Choose an instruction template or write custom instructions
53 4. (Optional) Set your OpenAI API key for direct processing
54 5. Click "Improve Script" to process
55 6. View, compare, and download the improved script
58
59 ### Direct API Connection
60 You can use your own OpenAI API key for direct processing, bypassing the server proxy. This can be useful for:
61 - Processing very large scripts
62 - Using custom model parameters
80 ## Limitations
81
82 - Token counting is approximate and may not exactly match OpenAI's tokenizer
83 - Very large scripts may take longer to process
84 - The quality of improvements depends on the clarity of instructions and the quality of the input script
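
The excerpt above describes the core pipeline: split the script into token-limited chunks, process each chunk, then recombine while handling the overlap. The sketch below illustrates only the splitting step. It is not the project's tokenizer.ts; countTokens stands in for the real Tokenizer, the chunk size is taken from the README figure, and the overlap constant matches the OVERLAP_TOKENS value visible in processor.ts below.

// Illustrative chunking with overlap, assuming a caller-supplied token counter.
const MAX_CHUNK_TOKENS = 3330; // approximate chunk size mentioned in the README
const OVERLAP_TOKENS = 250;    // matches the constant shown in processor.ts

function splitIntoChunks(text: string, countTokens: (s: string) => number): string[] {
  const words = text.split(/\s+/);
  const chunks: string[] = [];
  let current: string[] = [];
  let hasNewContent = false;
  for (const word of words) {
    current.push(word);
    hasNewContent = true;
    if (countTokens(current.join(" ")) >= MAX_CHUNK_TOKENS) {
      chunks.push(current.join(" "));
      // Carry a tail of words into the next chunk so adjacent chunks share context.
      const overlap: string[] = [];
      while (overlap.length < current.length && countTokens(overlap.join(" ")) < OVERLAP_TOKENS) {
        overlap.unshift(current[current.length - 1 - overlap.length]);
      }
      current = overlap;
      hasNewContent = false;
    }
  }
  if (hasNewContent && current.length > 0) chunks.push(current.join(" "));
  return chunks;
}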

untitled-2444/processor.ts • 10 matches

@all • Updated 1 month ago
1 import { OpenAI } from "https://esm.town/v/std/openai";
2 import { Tokenizer } from "../shared/tokenizer.ts";
3 import { blob } from "https://esm.town/v/std/blob";
4
5 // Use the standard OpenAI library for server-side processing
6 const openai = new OpenAI();
7 const tokenizer = new Tokenizer();
8
12 const OVERLAP_TOKENS = 250;
13
14 // OpenAI model configuration
15 const DEFAULT_MODEL = "gpt-4o";
16 const DEFAULT_TEMPERATURE = 0.7;
17
18 /**
19 * Process large text by splitting into chunks and sending to OpenAI
20 */
21 export async function processLargeText(
71 }
72
73 // Process with OpenAI
74 const processedChunk = await processChunkWithOpenAI(
75 chunk,
76 contextualInstructions,
287
288 /**
289 * Process a single chunk with OpenAI
290 */
291 async function processChunkWithOpenAI(
292 chunk: string,
293 instructions: string,
307 for (let attempt = 0; attempt < 3; attempt++) {
308 try {
309 const completion = await openai.chat.completions.create({
310 model: config.model,
311 messages: [{ role: "user", content: prompt }],
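
The loop at line 307 of this excerpt retries each chunk up to three times. Below is a minimal sketch of that pattern around the std/openai client; the backoff delay, the empty-string fallback, and the function name are assumptions rather than code taken from the hidden lines.

// Sketch of the retry pattern around openai.chat.completions.create.
import { OpenAI } from "https://esm.town/v/std/openai";

const openai = new OpenAI();

async function completeWithRetry(prompt: string, model = "gpt-4o"): Promise<string> {
  let lastError: unknown;
  for (let attempt = 0; attempt < 3; attempt++) {
    try {
      const completion = await openai.chat.completions.create({
        model,
        messages: [{ role: "user", content: prompt }],
      });
      return completion.choices[0]?.message?.content ?? "";
    } catch (error) {
      lastError = error;
      // Simple linear backoff before the next attempt; the real delay is not shown in the excerpt.
      await new Promise(resolve => setTimeout(resolve, 1000 * (attempt + 1)));
    }
  }
  throw lastError;
}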

untitled-2444/index.ts • 4 matches

@all • Updated 1 month ago
138 });
139
140 // API endpoint for OpenAI proxy
141 app.post("/api/openai/chat", async c => {
142 try {
143 const body = await c.req.json();
144
145 const { proxyChatCompletion, logApiUsage } = await import("./backend/openaiProxy.ts");
146 const result = await proxyChatCompletion(body);
147
157 return c.json(result);
158 } catch (error) {
159 console.error("OpenAI proxy error:", error);
160 return c.json({
161 error: error instanceof Error ? error.message : "Unknown error occurred"

untitled-2444/openaiProxy.ts • 7 matches

@all • Updated 1 month ago
1 /**
2 * OpenAI API Proxy
3 *
4 * Proxies requests to OpenAI API to avoid exposing API keys to the client
5 */
6 import { OpenAI } from "https://esm.town/v/std/openai";
7
8 const openai = new OpenAI();
9
10 /**
11 * Proxy a chat completion request to OpenAI
12 */
13 export async function proxyChatCompletion(params: any): Promise<any> {
22
23 // Create completion
24 const completion = await openai.chat.completions.create({
25 model,
26 messages: params.messages,
34 return completion;
35 } catch (error) {
36 console.error("OpenAI API error:", error);
37 throw error;
38 }
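
Combined with the /api/openai/chat route in the index.ts excerpt above, this proxy lets the browser request completions without ever handling an API key. A minimal sketch of a client-side call; the request body fields mirror the excerpts, but the response handling here is an assumption.

// Sketch of a browser-side call to the proxy route shown above.
const proxyResponse = await fetch("/api/openai/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Improve this paragraph: ..." }],
  }),
});
if (!proxyResponse.ok) {
  const errorData = await proxyResponse.json().catch(() => null);
  throw new Error(errorData?.error ?? `OpenAI proxy error: ${proxyResponse.status}`);
}
const proxyCompletion = await proxyResponse.json();
console.log(proxyCompletion.choices?.[0]?.message?.content);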

untitled-2444/DebugConsole.ts • 6 matches

@all • Updated 1 month ago
4 * A collapsible terminal-like console for debugging
5 */
6 import { ApiEvents } from "../../shared/OpenAIConnector.ts";
7
8 export class DebugConsole {
77 */
78 private setupEventListeners(): void {
79 window.addEventListener(`openai:${ApiEvents.REQUEST_STARTED}`, (e: any) => {
80 const detail = e.detail;
81 this.log('request', `Request started: ${detail.model}`, {
85 });
86
87 window.addEventListener(`openai:${ApiEvents.REQUEST_COMPLETED}`, (e: any) => {
88 const detail = e.detail;
89 this.log('success', `Request completed: ${detail.requestId}`, {
92 });
93
94 window.addEventListener(`openai:${ApiEvents.REQUEST_ERROR}`, (e: any) => {
95 const detail = e.detail;
96 this.log('error', `Error: ${detail.error}`, {
99 });
100
101 window.addEventListener(`openai:${ApiEvents.TOKEN_USAGE}`, (e: any) => {
102 const detail = e.detail;
103 this.log('info', `Token usage: ${detail.totalTokens} total (${detail.promptTokens} prompt, ${detail.completionTokens} completion)`, {
106 });
107
108 window.addEventListener(`openai:${ApiEvents.LOG}`, (e: any) => {
109 const detail = e.detail;
110 this.log('log', detail.message);

untitled-2444/OpenAIConnector.ts • 8 matches

@all • Updated 1 month ago
1 /**
2 * OpenAI API Connector
3 *
4 * Handles direct connections to the OpenAI API
5 */
6
14 };
15
16 export class OpenAIConnector {
17 private apiKey: string | null = null;
18 private baseUrl = 'https://api.openai.com/v1';
19 private useServerProxy: boolean = true;
20
104 */
105 private async createCompletionViaProxy(params: any): Promise<any> {
106 const response = await fetch('/api/openai/chat', {
107 method: 'POST',
108 headers: {
135 if (!response.ok) {
136 const errorData = await response.json().catch(() => null);
137 throw new Error(errorData?.error?.message || `OpenAI API error: ${response.status}`);
138 }
139
174 if (typeof window !== 'undefined') {
175 this.dispatchEvent = (eventName, data) => {
176 const event = new CustomEvent(`openai:${eventName}`, { detail: data });
177 window.dispatchEvent(event);
178 };
180 // Fallback for non-browser environments
181 this.dispatchEvent = (eventName, data) => {
182 console.log(`[OpenAI Event] ${eventName}:`, data);
183 };
184 }
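
The connector above reports its activity by dispatching window CustomEvents under an openai: prefix, and the DebugConsole excerpt earlier subscribes to the same names. A minimal sketch of that bridge; the string value of REQUEST_STARTED is an assumption, since the excerpts only show the ApiEvents identifiers.

// Sketch of the connector-to-console event bridge from the two excerpts above.
const ApiEvents = { REQUEST_STARTED: "requestStarted" } as const; // string value assumed

function dispatchOpenAIEvent(eventName: string, data: unknown): void {
  window.dispatchEvent(new CustomEvent(`openai:${eventName}`, { detail: data }));
}

// Listener side, as in DebugConsole.setupEventListeners.
window.addEventListener(`openai:${ApiEvents.REQUEST_STARTED}`, (e: Event) => {
  const detail = (e as CustomEvent).detail;
  console.log(`Request started: ${detail.model}`);
});

// Dispatch side, as in OpenAIConnector.
dispatchOpenAIEvent(ApiEvents.REQUEST_STARTED, { model: "gpt-4o", requestId: "demo-1" });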

Assistant/index.ts • 5 matches

@charmaine • Updated 1 month ago
3 import { cors } from "https://esm.sh/hono@3.11.12/middleware";
4 import { readFile, serveFile } from "https://esm.town/v/std/utils@85-main/index.ts";
5 import { OpenAI } from "https://esm.town/v/std/openai";
6 import {
7 getAuthUrl,
39 app.get("/shared/*", c => serveFile(c.req.path, import.meta.url));
40
41 // Initialize OpenAI client
42 const openai = new OpenAI();
43
44 // Helper function to get session from cookies
288
289 try {
290 // Use OpenAI to parse the natural language command
291 const completion = await openai.chat.completions.create({
292 model: "gpt-4o-mini",
293 messages: [

openai • 1 file match

@awei82 • Updated 7 hours ago

openai-client • 4 file matches

@cricks_unmixed4u • Updated 20 hours ago
reconsumeralization
import { OpenAI } from "https://esm.town/v/std/openai"; import { sqlite } from "https://esm.town/v/stevekrouse/sqlite"; /** * Practical Implementation of Collective Content Intelligence * Bridging advanced AI with collaborative content creation */ exp
kwhinnery_openai