Val Town Code Search

API Access

You can access search results via the JSON API by adding format=json to your query:

https://codesearch.val.run/?q=openai&page=16&format=json
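As a minimal sketch, a small helper can build such a URL and fetch one page of results. The root path and the helper names are assumptions for illustration, and the response schema is not documented here, so the parsed JSON is returned untyped:

```typescript
// Build a JSON search URL for codesearch.val.run (root path assumed here).
function buildSearchUrl(query: string, page = 1): string {
  const params = new URLSearchParams({ q: query, page: String(page), format: "json" });
  return `https://codesearch.val.run/?${params}`;
}

// Fetch one page of search results as parsed JSON.
async function searchCode(query: string, page = 1): Promise<unknown> {
  const res = await fetch(buildSearchUrl(query, page));
  if (!res.ok) throw new Error(`Search request failed: ${res.status}`);
  return res.json(); // inspect the shape before relying on specific fields
}
```

For example, `buildSearchUrl("openai", 16)` reproduces the query string shown above.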

For typeahead suggestions, use the /typeahead endpoint:

https://codesearch.val.run/typeahead?q=openai

Returns an array of strings in the format "username" or "username/projectName".
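The two suggestion shapes are easy to tell apart by splitting on "/". A hypothetical helper (not part of the API itself) might look like:

```typescript
// Parse a typeahead suggestion into a username and an optional project name.
function parseSuggestion(suggestion: string): { username: string; projectName?: string } {
  const [username, projectName] = suggestion.split("/");
  return projectName === undefined ? { username } : { username, projectName };
}
```

For instance, `parseSuggestion("stevekrouse/sqlite")` yields `{ username: "stevekrouse", projectName: "sqlite" }`, while `parseSuggestion("openai")` yields just `{ username: "openai" }`.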

Found 2077 results for "openai" (2791ms)

blog-3/get-old-posts.ts (5 matches)

@jxnblk•Updated 4 days ago
198 },
199 {
200 "title": "An Introduction to OpenAI fine-tuning",
201 "slug": "an-introduction-to-openai-fine-tuning",
202 "link": "/blog/an-introduction-to-openai-fine-tuning",
203 "description": "How to customize OpenAI to your liking",
204 "pubDate": "Fri, 25 Aug 2023 00:00:00 GMT",
205 "author": "Steve Krouse",
417 "slug": "val-town-newsletter-16",
418 "link": "/blog/val-town-newsletter-16",
419 "description": "Our seed round, growing team, Codeium completions, @std/openai, and more",
420 "pubDate": "Mon, 22 Apr 2024 00:00:00 GMT",
421 "author": "Steve Krouse",

fuad/main.tsx (15 matches)

@mees•Updated 4 days ago
15
16 // Get your API key from Val.town environment variables
17 const OPENAI_API_KEY = Deno.env.get("OPENAI_API_KEY");
18 const ASSISTANT_ID = Deno.env.get("ASSISTANT_ID");
19
20 if (!OPENAI_API_KEY || !ASSISTANT_ID) {
21 throw new Error(
22 "Missing API key or Assistant ID in environment variables. Please set OPENAI_API_KEY and ASSISTANT_ID.",
23 );
24 }
29 if (!threadId || threadId === "") {
30 console.log("Creating new thread...");
31 const threadResponse = await fetch("https://api.openai.com/v1/threads", {
32 method: "POST",
33 headers: {
34 "Authorization": `Bearer ${OPENAI_API_KEY}`,
35 "Content-Type": "application/json",
36 "OpenAI-Beta": "assistants=v2",
37 },
38 body: JSON.stringify({}),
51 // Add message and run assistant
52 console.log("Running assistant...");
53 const runResponse = await fetch(`https://api.openai.com/v1/threads/${threadId}/runs`, {
54 method: "POST",
55 headers: {
56 "Authorization": `Bearer ${OPENAI_API_KEY}`,
57 "Content-Type": "application/json",
58 "OpenAI-Beta": "assistants=v2",
59 },
60 body: JSON.stringify({
87
88 const statusResponse = await fetch(
89 `https://api.openai.com/v1/threads/${threadId}/runs/${runId}`,
90 {
91 headers: {
92 "Authorization": `Bearer ${OPENAI_API_KEY}`,
93 "OpenAI-Beta": "assistants=v2",
94 },
95 },
114 console.log("Getting messages...");
115 const messagesResponse = await fetch(
116 `https://api.openai.com/v1/threads/${threadId}/messages?order=desc&limit=1`,
117 {
118 headers: {
119 "Authorization": `Bearer ${OPENAI_API_KEY}`,
120 "OpenAI-Beta": "assistants=v2",
121 },
122 },

Glancer3Remix/.cursorrules (4 matches)

@stevekrouse•Updated 5 days ago
94Note: When changing a SQLite table's schema, change the table's name (e.g., add _2 or _3) to create a fresh table.
95
96### OpenAI
97
98```ts
99import { OpenAI } from "https://esm.town/v/std/openai";
100const openai = new OpenAI();
101const completion = await openai.chat.completions.create({
102 messages: [
103 { role: "user", content: "Say hello in a creative way" },

regexToBrainrot/main.tsx (4 matches)

@iostreamer•Updated 5 days ago
300 { getRandomRegexExplanation, saveRegexExplanation, getRegexExplanationById },
301 ReactMarkdown,
302 { OpenAI },
303 { renderToString },
304 { jsx, jsxs, Fragment },
306 import("https://esm.town/v/stainless_em/brainrotdb"),
307 import("npm:react-markdown@7"),
308 import("https://esm.town/v/std/openai"),
309 import("npm:react-dom@19/server.browser"),
310 import("npm:react@19/jsx-runtime"),
336 }
337
338 const openai = new OpenAI();
339
340 const abortController = new AbortController();
341 const completion = await openai.chat.completions.create({
342 messages: [
343 {

untitled-7610/main.tsx (25 matches)

@Get•Updated 5 days ago
6 * Users input their birth details, a sign to focus on, and a life domain.
7 * The backend then uses the "Astrologer Prompt" (a detailed system prompt)
8 * to query an OpenAI model, which generates a comprehensive astrological report.
9 *
10 * Core Logic:
13 * 2. Backend (Deno Val Function):
14 * - Receives these inputs.
15 * - Constructs a user message for the OpenAI API. This message includes
16 * the raw birth details, focus sign, domain, etc.
17 * - Uses the **ENTIRE** "Astrologer Prompt" (with {{sign}} and {{domain}}
18 * placeholders filled) as the system prompt for an OpenAI API call.
19 * - Calls a powerful OpenAI model (e.g., gpt-4o).
20 * - Receives the structured JSON astrological report from OpenAI.
21 * - Sends this report back to the client for display.
22 *
30 * May 28, 2025. The LLM primed by it is assumed to have access to or
31 * knowledge of transit data for this date.
32 * - OpenAI API Key: An `OPENAI_API_KEY` environment variable must be available
33 * in the Val Town environment for `std/openai` to work.
34 *
35 * Inspired by the structure of the "Goal-Oriented Multi-Agent Stock Analysis Val".
39
40// --- Imports ---
41import { OpenAI } from "https://esm.town/v/std/openai";
42// NOTE: Deno.env is used directly for environment variables.
43
55}
56
57// --- THE ASTROLOGER PROMPT (System Prompt for OpenAI) ---
58// This will be used by the backend to instruct the AI.
59// Placeholders {{sign}} and {{domain}} will be replaced dynamically.
411}
412
413// --- Helper Function: Call OpenAI API (Adapted - Robust error handling retained) ---
414async function callOpenAI(
415 systemPrompt: string,
416 userMessage: string,
422 // Simple hash for prompt might not be as useful if {{placeholders}} change content significantly.
423 // Consider logging snippet of system prompt if needed for debugging.
424 const logPrefix = `OpenAI Call [${callId}] (${model}, JSON: ${isJsonOutputRequired})`;
425 log(
426 `[INFO] ${logPrefix}: Initiating... System prompt (template used). User message snippet: ${
430
431 try {
432 const openai = new OpenAI(); // API Key from environment
433 const response = await openai.chat.completions.create({
434 model: model,
435 messages: [{ role: "system", content: systemPrompt }, { role: "user", content: userMessage }],
439 const content = response.choices?.[0]?.message?.content;
440 if (!content) {
441 log(`[ERROR] ${logPrefix}: OpenAI API returned unexpected or empty response structure.`);
442 throw new Error("Received invalid or empty response content from AI model.");
443 }
444 log(`[SUCCESS] ${logPrefix}: OpenAI call successful.`);
445 return { role: "assistant", content: content };
446 } catch (error) {
452
453 if (errorResponseData?.message) {
454 errorMessage = `OpenAI Error (${statusCode || "unknown status"}): ${errorResponseData.message}`;
455 } else if (errorResponseData?.error?.message) {
456 errorMessage = `OpenAI Error (${statusCode || "unknown status"}): ${errorResponseData.error.message}`;
457 }
458 // ... (retain other specific error message constructions from original Val)
486 .replace(new RegExp("{{domain}}", "g"), inputs.focusDomain);
487
488 // 2. Construct the User Message for the OpenAI API call
489 // The Astrologer Prompt expects `birth_chart_data`. We will pass raw birth details
490 // and let the LLM (primed with Astrologer Prompt) handle interpretation.
509 );
510
511 // 3. Call OpenAI
512 log("[STEP] Calling OpenAI with Astrologer Prompt...");
513 // Using gpt-4o as it's capable and the astrological prompt is complex.
514 // The ASTROLOGER_SYSTEM_PROMPT_TEMPLATE implies the model should generate JSON.
515 const aiResponse = await callOpenAI(populatedSystemPrompt, userMessageJson, "gpt-4o", true, log);
516
517 // 4. Parse and Return Result
522 // (e.g., if the AI itself couldn't perform the analysis and returned an error structure as per its instructions)
523 // The ASTROLOGER_SYSTEM_PROMPT_TEMPLATE does not explicitly define an error structure from the AI side,
524 // but callOpenAI returns its own {"error": "..."} if the call itself failed.
525 if (parsedAiResponse.error && aiResponse.role === "system") { // Error from callOpenAI wrapper
526 log(`[ERROR] OpenAI call wrapper reported an error: ${parsedAiResponse.error}`);
527 return { error: "Failed to get report from Astrologer AI.", details: parsedAiResponse.error };
528 }

personalShopper/.cursorrules (4 matches)

@bgschiller•Updated 5 days ago
100Note: When changing a SQLite table's schema, change the table's name (e.g., add _2 or _3) to create a fresh table.
101
102### OpenAI
103```ts
104import { OpenAI } from "https://esm.town/v/std/openai";
105const openai = new OpenAI();
106const completion = await openai.chat.completions.create({
107 messages: [
108 { role: "user", content: "Say hello in a creative way" },

SOARE/llm.ts (4 matches)

@sexy606ai•Updated 5 days ago
1import { OpenAI } from "https://esm.town/v/std/openai";
2import type { LLMRequest, LLMResponse, SystemDesignRequest, SOARESystem, SystemComponent } from "../../../shared/types/core.ts";
3
4export class SOAREBrain {
5 private openai: OpenAI;
6
7 constructor() {
8 this.openai = new OpenAI();
9 }
10
11 async reason(request: LLMRequest): Promise<LLMResponse> {
12 try {
13 const completion = await this.openai.chat.completions.create({
14 model: "gpt-4o",
15 messages: [

sexy606ai/analyzer.ts (3 matches)

@sexy606ai•Updated 5 days ago
1import { OpenAI } from "https://esm.town/v/std/openai";
2import * as cheerio from "https://esm.sh/cheerio@1.0.0-rc.12";
3import { EarningOpportunity, WebPageContent, AnalysisResult } from "../shared/types.ts";
4
5const openai = new OpenAI();
6
7export async function analyzeWebPage(url: string, focus?: string): Promise<AnalysisResult> {
247
248 try {
249 const completion = await openai.chat.completions.create({
250 model: "gpt-4o-mini",
251 messages: [

Kay/Main.tsx (20 matches)

@Get•Updated 5 days ago
7 * Uses 'npm:pdf.js-extract' for PDF extraction.
8 * Serves HTML UI & API endpoint from the same Val.
9 * Assumes 'openai' secret is set in Val Town environment variables.
10 *
11 * Last Updated: {{current_date}} (Templated Version)
18 * max_pdf_size_mb: {{max_pdf_size_mb}}, // e.g., 10
19 * text_truncation_length: {{text_truncation_length}}, // e.g., 25000
20 * openai_model_name: "{{openai_model_name}}", // e.g., "gpt-4o"
21 * contact_form_placeholders_en: { name: "Your Name", email: "Your Email", message: "Message" },
22 * contact_form_placeholders_es: { name: "Tu Nombre", email: "Tu Correo", message: "Mensaje" },
639export default async function(req: Request) {
640 // --- Dynamic Imports (Unchanged) ---
641 const { OpenAI } = await import("https://esm.town/v/std/openai");
642 // const { z } = await import("npm:zod"); // Zod might be optional if config is trusted
643 const { fetch } = await import("https://esm.town/v/std/fetch");
645
646 // --- CONFIGURATION (These would be replaced by the template variables at generation time) ---
647 const APP_CONFIG = `\{{\app_config_json}}`; // e.g., { openai_model_name: "gpt-4o", text_truncation_length: 25000, ... }
648 const ANALYSIS_AGENTS = `\{\{analysis_agents_json}}`; // Array of agent objects
649
651 async function extractPdfTextNative(data: ArrayBuffer, fileName: string, log: LogEntry[]): Promise<string | null> { /* ... original ... */ }
652
653 // --- Helper Function: Call OpenAI API (Uses APP_CONFIG for model) ---
654 async function callOpenAI(
655 openai: OpenAI,
656 systemPrompt: string,
657 userMessage: string,
658 modelFromConfig = APP_CONFIG.openai_model_name || "gpt-4o", // Use configured model
659 expectJson = false,
660 ): Promise<{ role: "assistant" | "system"; content: string | object }> {
661 /* ... original logic, but use modelFromConfig ... */
662 const model = modelFromConfig;
663 // ... rest of the original callOpenAI function
664 try {
665 const response = await openai.chat.completions.create({
666 model,
667 messages: [{ role: "system", content: systemPrompt }, { role: "user", content: userMessage }],
701 log: LogEntry[],
702 ): Promise<LogEntry[]> {
703 const openai = new OpenAI();
704 log.push({ agent: "System", type: "step", message: "Workflow started." });
705 // ... initial logging of input type ...
730 // If chaining is needed, {{previous_output}} could be another placeholder in prompts.
731
732 const agentResult = await callOpenAI(
733 openai,
734 agentSystemPrompt, // The agent's specific prompt
735 truncText, // User message is the doc text itself, or could be empty if prompt is self-contained
736 APP_CONFIG.openai_model_name,
737 agentConfig.expects_json
738 );
830 * 1. Define Application Configuration:
831 * Fill in the \`{{app_config_json}}\` placeholder with general settings for your app
832 * (e.g., OpenAI model, max file size, default language).
833 *
834 * 2. Define Analysis Agents:
836 * - `agent_id`: A unique machine-readable ID.
837 * - `agent_name_en`/`agent_name_es`: Human-readable names for UI and logs.
838 * - `system_prompt`: The OpenAI prompt for this agent. Can use `{{document_text}}`.
839 * - `expects_json`: Boolean, if the prompt asks OpenAI for JSON output.
840 * - `ui_display_info`: How to render this agent's results:
841 * - `card_title_en`/`card_title_es`: Title for the results card.
857 * and `{{app_config.document_format_accepted_label}}` (e.g. "PDF") for UI text.
858 *
859 * 5. OpenAI API Key:
860 * Ensure your environment (e.g., Val Town secrets) has the `OPENAI_API_KEY` (or the appropriate
861 * environment variable name for the `OpenAI` library) set.
862 *
863 * 6. Deployment:

sexy606ai/README.md (2 matches)

@sexy606ai•Updated 5 days ago
74### Tech Stack
75- **Backend**: Hono.js for API routing
76- **AI**: OpenAI GPT-4o-mini for content analysis
77- **Web Scraping**: Cheerio for HTML parsing
78- **Frontend**: Vanilla JavaScript with TailwindCSS
91## 🔧 Environment Setup
92
93The analyzer uses OpenAI's API which is automatically configured in Val Town. No additional setup required!
94
95## 📊 What It Analyzes

openai_enrichment (6 file matches)

@stevekrouse•Updated 30 mins ago

openai_enrichment (1 file match)

@charmaine•Updated 12 hours ago
reconsumeralization
import { OpenAI } from "https://esm.town/v/std/openai";
import { sqlite } from "https://esm.town/v/stevekrouse/sqlite";
/**
 * Practical Implementation of Collective Content Intelligence
 * Bridging advanced AI with collaborative content creation
 */
exp
kwhinnery_openai