Val Town Code Search

API Access

You can access search results via JSON API by adding format=json to your query:

https://codesearch.val.run/$1?q=openai&page=51&format=json

For typeahead suggestions, use the /typeahead endpoint:

https://codesearch.val.run/typeahead?q=openai

Returns a JSON array of strings, each in the format "username" or "username/projectName".
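A minimal sketch of calling both endpoints from TypeScript. The helper names and the use of the root path for search are assumptions (the example URL above shows a `$1` placeholder); the code only relies on what is documented here: `format=json` returns JSON, and /typeahead returns an array of strings.

```typescript
const BASE = "https://codesearch.val.run";

// Build the JSON search URL (query, page, and format=json as documented above).
function searchUrl(query: string, page = 1): string {
  return `${BASE}/?q=${encodeURIComponent(query)}&page=${page}&format=json`;
}

// Build the typeahead URL; the endpoint returns an array of
// "username" or "username/projectName" strings.
function typeaheadUrl(query: string): string {
  return `${BASE}/typeahead?q=${encodeURIComponent(query)}`;
}

// Fetch helpers (network sketch, untested against the live service):
async function searchJson(query: string, page = 1): Promise<unknown> {
  const res = await fetch(searchUrl(query, page));
  if (!res.ok) throw new Error(`search failed: ${res.status}`);
  return res.json();
}

async function typeahead(query: string): Promise<string[]> {
  const res = await fetch(typeaheadUrl(query));
  if (!res.ok) throw new Error(`typeahead failed: ${res.status}`);
  return res.json();
}
```

`encodeURIComponent` keeps multi-word queries safe, e.g. `typeaheadUrl("foo bar")` yields `...?q=foo%20bar`.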

Found 2159 results for "openai" (2842ms)

beeAi/frontend.html (1 match)

@armadillomike•Updated 3 weeks ago
<footer class="bg-yellow-500 text-black p-3 text-center text-sm">
  <p>BeeGPT - Powered by OpenAI | <a href="https://val.town" target="_top" class="underline">View on Val Town</a></p>
</footer>

beeAi/README.md (6 matches)

@armadillomike•Updated 3 weeks ago
# BeeGPT - Bee-Themed AI Assistant

A fun, bee-themed wrapper for OpenAI's GPT models that adds bee personality, puns, and facts to AI responses, plus a bee-themed image generator.

## Features
…
## How It Works

BeeGPT uses OpenAI's API to generate responses and images, but adds a bee-themed personality layer through prompt engineering. The system includes:

1. A backend API that communicates with OpenAI
2. A bee-themed prompt that instructs the AI to respond with bee-related content
3. A bee-themed image generator that enhances prompts with bee elements
…
- Built on Val Town
- Uses OpenAI's GPT models (gpt-4o-mini for chat)
- Uses OpenAI's DALL-E 3 for image generation
- Frontend built with HTML, CSS, and vanilla JavaScript
- Styled with Tailwind CSS via CDN
…
## Environment Variables

This project requires an OpenAI API key to be set in your Val Town environment variables.

## License

vNext/main.ts (14 matches)

@salon•Updated 3 weeks ago
  timestamp: string;
  level: LogLevel;
  component: string; // e.g., "Tool:OpenAI", "CustomFunc:ValidateData"
  message: string;
  details?: string; // Stringified JSON for complex objects
…
// For this example, they are functions called by a central router.

// Using Val Town's std/fetch and std/openai
import { fetch } from "https://esm.town/v/std/fetch";
import { OpenAI } from "https://esm.town/v/std/openai";

// 1. HTTP Fetch Tool Endpoint
…
}

// 2. OpenAI Call Tool Endpoint
async function handleOpenAICall(
  reqPayload: {
    messages: Array<{ role: "system" | "user" | "assistant"; content: string }>;
…
    max_tokens,
  } = reqPayload;
  logger.log("INFO", `Making OpenAI call to ${model}`, {
    messageCount: messages.length,
  });
  try {
    const openai = new OpenAI(); // Assumes OPENAI_API_KEY is in environment
    const completion = await openai.chat.completions.create({
      model,
      messages,
…
      ...(max_tokens !== undefined && { max_tokens }),
    });
    logger.log("SUCCESS", "OpenAI call successful.", {
      modelUsed: completion.model,
    });
    return completion;
  } catch (e: any) {
    logger.log("ERROR", "OpenAI API call failed.", e);
    throw e;
  }
…
      responsePayload = await handleHttpFetch(payload, logger);
      break;
    case "openai_call":
      responsePayload = await handleOpenAICall(payload, logger);
      break;
    case "string_template":
…
    {
      "id": "clean_petal_width_llm",
      "endpoint": "/api/tools/openai_call",
      "description": "LLM cleaning for 'petal.width'",
      "inputs": {
…
    {
      "id": "insights_llm",
      "endpoint": "/api/tools/openai_call",
      "description": "Get LLM insights on summary",
      "inputs": {

GitHub-trending-summary/summarize-to-email (3 matches)

@buzz_code•Updated 3 weeks ago
import { email } from "https://esm.town/v/std/email";
import { OpenAI } from "https://esm.town/v/std/openai";
import { JSDOM } from "npm:jsdom";
import { NodeHtmlMarkdown } from "npm:node-html-markdown";
…
  );

  const openai = new OpenAI();
  console.log(trendingMarkdown);

  const completion = await openai.chat.completions.create({
    messages: [
      {

parallel/main.ts (40 matches)

@salon•Updated 3 weeks ago
// superpowered_agent_platform_v3_plus_leadgen.ts
// Declarative tools, parallel execution, Val Town idiomatic OpenAI & HTTP Fetch tools.
// Includes original v3 demo AND new lead generation workflow.

import { fetch } from "https://esm.town/v/std/fetch";
import { OpenAI } from "https://esm.town/v/std/openai";

// --- Core Interfaces & Logger ---
…
// --- Tool Definitions & Registry ---
type ToolType = "http_fetch" | "openai_call" | "string_template" | "custom_js_function";

interface BaseToolConfig {}
…
  staticUrl?: string;
}
interface OpenAiCallToolConfig extends BaseToolConfig {
  defaultModel?: string;
}
…
};

interface OpenAiDynamicParams {
  prompt?: string;
  messages?: Array<{ role: "system" | "user" | "assistant"; content: string }> | string;
…
}

const openAiCallToolHandler: ToolFunction<OpenAiDynamicParams, { result: any }, OpenAiCallToolConfig> = async (
  params,
  staticConfig,
…
) => {
  const { mandateId, taskId, log } = context;
  let openaiClient: OpenAI;
  try {
    openaiClient = new OpenAI();
  } catch (e: any) {
    log("ERROR", "OpenAiCallTool", "Failed to init OpenAI client.", { originalError: e.message });
    return { mandateId, correlationId: taskId, payload: { result: null }, error: "OpenAI client init failed." };
  }
…
    messagesPayload = [{ role: "user", content: params.prompt }];
  } else {
    log("ERROR", "OpenAiCallTool", "Input required: 'messages' (array/string) or 'prompt' (string).");
    return { mandateId, correlationId: taskId, payload: { result: null }, error: "Missing OpenAI input." };
  }
…
    apiRequestBody.max_tokens = Math.floor(params.max_tokens);

  log("INFO", "OpenAiCallTool", "Making OpenAI chat completion call.", {
    model,
    messagesCount: messagesPayload.length,
  });
  log("DEBUG", "OpenAiCallTool", `Messages payload preview:`, {
    firstMessageContent: messagesPayload[0]?.content.substring(0, 50) + "...",
  });

  try {
    const completion = await openaiClient.chat.completions.create(apiRequestBody);
    if (!completion?.choices?.length) {
      log("WARN", "OpenAiCallTool", "OpenAI response empty/unexpected.", { response: completion });
    }
    log("SUCCESS", "OpenAiCallTool", "OpenAI call successful.", { modelUsed: completion?.model });
    return { mandateId, correlationId: taskId, payload: { result: completion } };
  } catch (e: any) {
    log("ERROR", "OpenAiCallTool", "OpenAI API call failed.", e);
    const errMsg = e.response?.data?.error?.message || e.error?.message || e.message || "Unknown OpenAI API error";
    return { mandateId, correlationId: taskId, payload: { result: null }, error: errMsg };
  }
…
  const finalLeads: { name: string; website: string; email: string; draftedEmail: string }[] = [];
  let openaiClient: OpenAI;
  try {
    openaiClient = new OpenAI();
  } catch (e: any) {
    log("ERROR", "DraftEmails", "Failed to init OpenAI client.", e);
    return { mandateId, correlationId: taskId, payload: { finalLeads: [] }, error: "OpenAI client init failed." };
  }
…
Goal: pique interest for follow-up. Concise (under 100 words). Email body text only.`;
    const messages = [{ role: "user" as const, content: prompt }];
    log("DEBUG", "DraftEmails", "Sending prompt to OpenAI...", { promptPreview: prompt.substring(0, 100) + "..." });

    const completion = await openaiClient.chat.completions.create({
      model: "gpt-4o-mini",
      messages,
…
    if (draftedEmail === "[Error generating email]" || !draftedEmail) {
      log("WARN", "DraftEmails", `OpenAI no usable content for ${lead.name}.`, { response: completion });
      finalLeads.push({ ...lead, draftedEmail: "[Failed to generate email content]" });
    } else {
…
    } catch (error: any) {
      log("ERROR", "DraftEmails", `Failed for ${lead.name}: ${error.message}`, error);
      finalLeads.push({ ...lead, draftedEmail: "[OpenAI API call failed]" });
    }
    await delay(300); // Small delay between OpenAI calls
  }
  log("SUCCESS", "DraftEmails", `Email drafting complete for ${finalLeads.length} leads processed.`);
…
const enhancedAnalysisWorkflowV3: WorkflowDefinition = {
  id: "enhancedAnalysisV3",
  description: "Fetches data, generates a summary (OpenAI or legacy), and combines results.",
  initialInputSchema: { required: ["userText", "userName"] },
  steps: [
…
    },
    {
      id: "step2a_summarize_openai",
      toolType: "openai_call",
      parameters: {
        messages: { source: "initial", field: "userText" },
        temperature: { source: "initial", field: "summaryConfig.temperature" },
      },
      condition: { source: "initial", field: "config.useOpenAISummarizer" },
    },
    {
…
      parameters: {
        userName: { source: "initial", field: "userName" },
        aiSummary: { source: "step2a_summarize_openai", field: "result.choices[0].message.content" },
        legacySummary: { source: "step2b_summarize_legacy", field: "summary" },
        fetchTitle: { source: "step1_fetchData", field: "data.title" },
      },
      dependencies: ["step1_fetchData", "step2a_summarize_openai", "step2b_summarize_legacy"],
    },
  ],
  outputMapping: {
    finalResult: { source: "step3_combine", field: "result" },
    aiModelUsed: { source: "step2a_summarize_openai", field: "result.model" },
    fetchedDataStatus: { source: "step1_fetchData", field: "status" },
  },
…
toolRegistry.registerTool("http_fetch", httpFetchToolHandler);
toolRegistry.registerTool("openai_call", openAiCallToolHandler);
toolRegistry.registerTool("string_template", stringTemplateToolHandler);
toolRegistry.registerTool("custom_js_function", async (params, staticConfig: CustomJsFunctionToolConfig, context) => {
…
<label for="userText">Text for summarization (Required):</label><textarea id="userText" name="userText" required rows="3">JWST images show distant galaxies.</textarea>
<fieldset><legend>Summarizer Choice</legend><div class="radio-group">
  <label><input type="radio" name="summarizerChoice" value="openai" checked> OpenAI</label>
  <label><input type="radio" name="summarizerChoice" value="legacy"> Legacy</label>
</div></fieldset>
…
const workflowStatusBox = document.getElementById('workflowStatusBox');
const workflowDefinitionClient = { id: "enhancedAnalysisV3", steps: [
  { id: "step1_fetchData", description: "Fetch Data" }, { id: "step2a_summarize_openai", description: "Summarize (OpenAI)" },
  { id: "step2b_summarize_legacy", description: "Summarize (Legacy)" }, { id: "step3_combine", description: "Combine Results" }
]};
…
  customFetchUrl: fd.get('customFetchUrl') || undefined,
  useLegacySummarizer: choice === 'legacy',
  config: { useOpenAISummarizer: choice === 'openai' },
};
try {
…
const workflowDefinitionClient = { id: "leadGenWorkflowV1", steps: [
  { id: "step_search", description: "Simulate Search" }, { id: "step_scrape_emails", description: "Scrape Emails" },
  { id: "step_draft_emails", description: "Draft Emails (OpenAI)" }, { id: "step_format_summary", description: "Format Summary" }
]};
${CLIENT_SIDE_UI_SCRIPT_CONTENT}

my-first-val/03_cron.tsx (5 matches)

@stevekrouse•Updated 3 weeks ago
// ---------------- Val Town Standard Library ----------------
// Val Town provides limited free hosted services, including
// functions for sending emails and using OpenAI
import { email } from "https://esm.town/v/std/email";
import { OpenAI } from "https://esm.town/v/std/OpenAI";

// --------------------- Get weather data --------------------
…
export default async function() {
  // Use OpenAI provided by Val Town to get a weather recommendation
  // Experiment with changing the prompt
  const openai = new OpenAI();
  let chatCompletion = await openai.chat.completions.create({
    messages: [{
      role: "user",

Townie-02/system_prompt.txt (4 matches)

@jxnblk•Updated 3 weeks ago
Note: When changing a SQLite table's schema, change the table's name (e.g., add _2 or _3) to create a fresh table.

### OpenAI

```ts
import { OpenAI } from "https://esm.town/v/std/openai";
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  messages: [
    { role: "user", content: "Say hello in a creative way" },

Townie/system_prompt.txt (4 matches)

@devdoshi•Updated 3 weeks ago
Note: When changing a SQLite table's schema, change the table's name (e.g., add _2 or _3) to create a fresh table.

### OpenAI

```ts
import { OpenAI } from "https://esm.town/v/std/openai";
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  messages: [
    { role: "user", content: "Say hello in a creative way" },

reelMitrajs/README.md (2 matches)

@Omyadav•Updated 3 weeks ago
  const prompt = `Generate a short 15-second Instagram reel script and 5 viral hashtags for the topic: "${topic}". Make the script engaging and desi style.`;

  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${Deno.env.get("OPENAI_API_KEY")}`,
      "Content-Type": "application/json",
    },

untitled-509/README.md (2 matches)

@Omyadav•Updated 3 weeks ago
  const prompt = `Generate a short 15-second Instagram reel script and 5 viral hashtags for the topic: "${topic}". Make the script engaging and desi style.`;

  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${Deno.env.get("OPENAI_API_KEY")}`,
      "Content-Type": "application/json",
    },

openai-client (1 file match)

@cricks_unmixed4u•Updated 3 days ago

openai_enrichment (6 file matches)

@stevekrouse•Updated 5 days ago
kwhinnery_openai
reconsumeralization