Val Town Code Search

API Access

You can access search results via a JSON API by adding format=json to your query:

https://codesearch.val.run/?q=openai&page=44&format=json

For typeahead suggestions, use the /typeahead endpoint:

https://codesearch.val.run/typeahead?q=openai

Returns an array of strings in the format "username" or "username/projectName".
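As a sketch, both endpoints can be called from TypeScript. The searchUrl helper, the root search path, and the exact shape of the JSON responses are assumptions based on the examples above, not a documented client:

```typescript
// Build a JSON search URL (query parameters mirror the example above;
// the root path "/" is an assumption).
function searchUrl(query: string, page = 1): string {
  const u = new URL("https://codesearch.val.run/");
  u.searchParams.set("q", query);
  u.searchParams.set("page", String(page));
  u.searchParams.set("format", "json");
  return u.toString();
}

// Fetch typeahead suggestions: an array of strings like
// "username" or "username/projectName".
async function typeahead(query: string): Promise<string[]> {
  const res = await fetch(
    `https://codesearch.val.run/typeahead?q=${encodeURIComponent(query)}`,
  );
  return await res.json();
}
```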

Found 3218 results for "openai" (2401ms)

Townie/.cursorrules (4 matches)

@charmaintest • Updated 1 month ago
94Note: When changing a SQLite table's schema, change the table's name (e.g., add _2 or _3) to create a fresh table.
95
96### OpenAI
97
98```ts
99import { OpenAI } from "https://esm.town/v/std/openai";
100const openai = new OpenAI();
101const completion = await openai.chat.completions.create({
102 messages: [
103 { role: "user", content: "Say hello in a creative way" },

chatter/chatCompletion.js (2 matches)

@yawnxyz • Updated 1 month ago
3export async function groqChatCompletion(apiKey, payload) {
4 console.log('>>> [groqChatCompletion] Payload:', payload);
5 const response = await fetch('https://api.groq.com/openai/v1/chat/completions', {
6 method: 'POST',
7 headers: {
53 try {
54 const res = await groqChatCompletion(apiKey, {
55 model: 'openai/gpt-oss-120b',
56 messages: [
57 { role: 'system', content: 'Classify the user request as either links or text. Respond with a single token: links or text. Use links if the user appears to want a search results list of sources; use text if the user is asking for an explanation/summary/definition.' },

chatter/config.js (1 match)

@yawnxyz • Updated 1 month ago
3
4export const settings = {
5 model: "openai/gpt-oss-120b",
6 stream: false,
7 reasoningEffort: "low",

momentum/main.tsx (5 matches)

@join • Updated 1 month ago
1// @ts-ignore
2import { OpenAI } from "https://esm.town/v/std/openai?v=4";
3// @ts-ignore
4import { Hono } from "npm:hono@4.4.12";
833 debugLog("Final AI Payload:", userInput);
834
835 const openai = new OpenAI();
836 const completion = await openai.chat.completions.create({
837 model: "gpt-4o",
838 messages: [{ role: "system", content: PORTFOLIO_ANALYST_PROMPT }, {
883 ];
884
885 const openai = new OpenAI();
886 const response = await openai.chat.completions.create({
887 model: "gpt-4o",
888 messages: messages,

new-val-notifications/README.md (3 matches)

@charmaine • Updated 1 month ago
8- `main.tsx` - Main cron function and Discord webhook
9- `fetch-vals.ts` - Fetches and filters new vals using Val Town SDK
10- `ai-summarizer.ts` - Generates OpenAI summaries for vals without READMEs
11
12## Additional logic
16- Filter out commonly remixed utility vals like Blob Admin, SQLiteExplorer
17- Skip shallow remixes (forks with ≤3 versions)
18- Prioritize `main.tsx` files for AI summaries and limit code sent to OpenAI to 5KB to avoid token limits
19
20## Setup
21
221. Set `DISCORD_WEBHOOK` environment variable
232. Set `OPENAI_API_KEY` environment variable
243. Set `testMode = false` in main.tsx for production

Agentive/main.tsx (15 matches)

@join • Updated 1 month ago
2import { Hono } from "npm:hono@4.4.12";
3// @ts-ignore
4import { OpenAI } from "https://esm.town/v/std/openai";
5import type { Context } from "npm:hono@4.4.12";
6import { streamText } from "npm:hono/streaming";
618 if (!industry) return c.json({ error: "Industry is required" }, 400);
619 try {
620 const openai = new OpenAI();
621 const completion = await openai.chat.completions.create({
622 model: "gpt-4o",
623 messages: [{ role: "system", content: DYNAMIC_LIST_GENERATOR_PROMPT }, {
641 if (!occupation) return c.json({ error: "Occupation is required" }, 400);
642 try {
643 const openai = new OpenAI();
644 const completion = await openai.chat.completions.create({
645 model: "gpt-4o",
646 messages: [{ role: "system", content: DYNAMIC_LIST_GENERATOR_PROMPT }, {
667 }\n\nOccupation: ${occupation_title}\n\nTask: ${task}`;
668 try {
669 const openai = new OpenAI();
670 const completion = await openai.chat.completions.create({
671 model: "gpt-4o",
672 messages: [{ role: "system", content: PROMPT_REFINER_SYSTEM_PROMPT }, {
688 }
689 try {
690 const openai = new OpenAI();
691 const completion = await openai.chat.completions.create({
692 model: "gpt-4o",
693 messages: [{ role: "system", content: INPUT_EXTRACTOR_SYSTEM_PROMPT }, {
709 }
710 try {
711 const openai = new OpenAI();
712 const completion = await openai.chat.completions.create({
713 model: "gpt-4o",
714 messages: [
751
752 try {
753 const openai = new OpenAI();
754 const agentStream = await openai.chat.completions.create({
755 model: "gpt-4o",
756 messages: [{ role: "system", content: systemPromptWithContext }, {
776 if (!raw_text) return c.json({ error: "raw_text is required" }, 400);
777 try {
778 const openai = new OpenAI();
779 const htmlCompletion = await openai.chat.completions.create({
780 model: "gpt-4o-mini", // Using a faster model for simple formatting
781 messages: [{ role: "system", content: HTML_FORMATTER_SYSTEM_PROMPT }, {

synastry/main.tsx (3 matches)

@join • Updated 1 month ago
1// @ts-ignore
2import { OpenAI } from "https://esm.town/v/std/openai?v=4";
3
4// --- AI BEHAVIORAL GUIDELINES ---
461 }
462
463 const openai = new OpenAI();
464
465 const completion = await openai.chat.completions.create({
466 model: "gpt-4o",
467 messages: [

helical2/main.tsx (5 matches)

@join • Updated 1 month ago
1// @ts-ignore
2import { OpenAI } from "https://esm.town/v/std/openai?v=4";
3
4// --- AI BEHAVIORAL GUIDELINES ---
477 if (req.method === "POST" && action === "getAstrology") {
478 try {
479 // The user wants to use OpenAI, so we instantiate it.
480 // NOTE: This requires the OPENAI_API_KEY environment variable to be set.
481 const openai = new OpenAI();
482 const { planetaryData } = await req.json();
483
493 }`;
494
495 const completion = await openai.chat.completions.create({
496 model: "gpt-4o",
497 messages: [

helical/main.tsx (5 matches)

@real • Updated 1 month ago
1// @ts-ignore
2import { OpenAI } from "https://esm.town/v/std/openai?v=4";
3
4// --- AI BEHAVIORAL GUIDELINES ---
477 if (req.method === "POST" && action === "getAstrology") {
478 try {
479 // The user wants to use OpenAI, so we instantiate it.
480 // NOTE: This requires the OPENAI_API_KEY environment variable to be set.
481 const openai = new OpenAI();
482 const { planetaryData } = await req.json();
483
493 }`;
494
495 const completion = await openai.chat.completions.create({
496 model: "gpt-4o",
497 messages: [

lorraine/main.js (5 matches)

@yawnxyz • Updated 1 month ago
12// Settings configuration
13const settings = {
14 model: 'openai/gpt-oss-120b',
15 stream: false,
16 reasoningEffort: 'low'
387 // Optionally set language or prompt here
388
389 const resp = await fetch('https://api.groq.com/openai/v1/audio/transcriptions', {
390 method: 'POST',
391 headers: { 'Authorization': 'Bearer ' + apiKey },
526 language,
527 offset = 0,
528 model = 'openai/gpt-oss-120b',
529 reasoning_effort = 'medium',
530 tools = [{ type: 'browser_search' }],
653 language,
654 offset = 0,
655 model = 'openai/gpt-oss-120b',
656 reasoning_effort = 'medium',
657 tools = [{ type: 'browser_search' }],
719 start: async (controller) => {
720 try {
721 const upstream = await fetch("https://api.groq.com/openai/v1/chat/completions", {
722 method: 'POST',
723 headers: {

hello-realtime (5 file matches)

@jubertioai • Updated 2 days ago

Sample app for the OpenAI Realtime API

openai-gemini (1 file match)

@ledudu • Updated 1 week ago

reconsumeralization

import { OpenAI } from "https://esm.town/v/std/openai";
import { sqlite } from "https://esm.town/v/stevekrouse/sqlite";
/**
 * Practical Implementation of Collective Content Intelligence
 * Bridging advanced AI with collaborative content creation
 */
exp
kwhinnery_openai