Val Town Code Search

API Access

You can access search results via the JSON API by adding format=json to your query string:

https://codesearch.val.run/$2?q=openai&page=156&format=json
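A small helper can build the JSON search URL. This is a sketch: the example URL above contains a placeholder path ("$2"), so the root path used here is an assumption; adjust it to the actual search path.

```typescript
// Build a JSON search URL for the Val Town code search API.
// The root path is an assumption (the docs show a "$2" placeholder).
function buildSearchUrl(query: string, page = 1): string {
  const url = new URL("https://codesearch.val.run/");
  url.searchParams.set("q", query);
  url.searchParams.set("page", String(page));
  url.searchParams.set("format", "json");
  return url.toString();
}

// Usage (network call, shown for illustration):
// const res = await fetch(buildSearchUrl("openai", 156));
// const results = await res.json();
```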

For typeahead suggestions, use the /typeahead endpoint:

https://codesearch.val.run/typeahead?q=openai

Returns an array of strings in the format "username" or "username/projectName".
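A minimal sketch of calling the typeahead endpoint and splitting its suggestion strings; the helper names here are illustrative, not part of the API.

```typescript
// Build the typeahead URL for a query.
function typeaheadUrl(query: string): string {
  const url = new URL("https://codesearch.val.run/typeahead");
  url.searchParams.set("q", query);
  return url.toString();
}

// Split a suggestion into its parts: either a bare username
// or "username/projectName".
function parseSuggestion(s: string): { username: string; projectName?: string } {
  const [username, projectName] = s.split("/");
  return projectName ? { username, projectName } : { username };
}

// Usage (network call, shown for illustration):
// const suggestions: string[] = await (await fetch(typeaheadUrl("openai"))).json();
// const parsed = suggestions.map(parseSuggestion);
```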

Found 2147 results for "openai" (809ms)

smallweb_openapi_guide main.tsx (12 matches)

@all•Updated 8 months ago
231 <button class="collapsible-button">Overview</button>
232 <div class="collapsible-content">
233 <p>This schema defines several components that can be used to integrate OpenAI's services into a logging and configuration system. Here's how each component relates to potential OpenAI use cases:</p>
234 <ul>
235 <li><strong>App</strong>: Represents an OpenAI-powered application with a name and URL.</li>
236 <li><strong>Config</strong>: Defines configuration options for OpenAI API integration and application settings.</li>
237 <li><strong>ConsoleLog</strong>: Captures console output from OpenAI model interactions and application processes.</li>
238 <li><strong>CronLog</strong>: Logs scheduled tasks related to OpenAI operations, such as model fine-tuning or dataset updates.</li>
239 <li><strong>HttpLog</strong>: Records HTTP requests made to and from the OpenAI API.</li>
240 </ul>
241 </div>
243
244 <div class="collapsible">
245 <button class="collapsible-button">Key Components and OpenAI Use Cases</button>
246 <div class="collapsible-content">
247 <dl>
255 <dt>Config</dt>
256 <dd>
257 Use Case: Store OpenAI API keys, model preferences, and application settings.
258 <br>
259 Example: Configure the GPT model to use, set token limits, and specify custom domains for AI services.
278 Use Case: Monitor and analyze API usage and performance.
279 <br>
280 Example: Track rate limits, response times, and payload sizes for OpenAI API calls.
281 </dd>
282 </dl>
294 <li><strong>Usage Analytics</strong>: Analyze HttpLog data to gain insights into API usage patterns, popular features, and potential areas for optimization or scaling.</li>
295 </ul>
296 <p>By implementing this schema, developers can create robust, scalable applications that effectively integrate and manage OpenAI's powerful AI capabilities while maintaining comprehensive logging and configuration control.</p>
297 </div>
298 </div>
312 <meta charset="UTF-8">
313 <meta name="viewport" content="width=device-width, initial-scale=1.0">
314 <title>OpenAI Integration Schema Guide</title>
315 <style>
316 body {
371 </head>
372 <body>
373 <h1>OpenAI Integration Schema Guide</h1>
374 ${guide}
375

chat main.tsx (6 matches)

@lt_07•Updated 8 months ago
5 options = {},
6) => {
7 // Initialize OpenAI API stub
8 const { Configuration, OpenAIApi } = await import(
9 "https://esm.sh/openai@3.3.0"
10 );
11 const configuration = new Configuration({
12 apiKey: process.env.OPENAI,
13 });
14 const openai = new OpenAIApi(configuration);
15 // Request chat completion
16 const messages = typeof prompt === "string"
17 ? [{ role: "user", content: prompt }]
18 : prompt;
19 const { data } = await openai.createChatCompletion({
20 model: "gpt-3.5-turbo-0613",
21 messages,

valleBlogV0 main.tsx (3 matches)

@demon•Updated 8 months ago
3import { passwordAuth } from "https://esm.town/v/pomdtr/password_auth?v=84";
4import { verifyToken } from "https://esm.town/v/pomdtr/verifyToken?v=1";
5import { openai } from "npm:@ai-sdk/openai";
6import ValTown from "npm:@valtown/sdk";
7import { streamText } from "npm:ai";
36
37 const stream = await streamText({
38 model: openai("gpt-4o", {
39 baseURL: "https://std-openaiproxy.web.val.run/v1",
40 apiKey: Deno.env.get("valtown"),
41 } as any),

getModelBuilder main.tsx (14 matches)

@lt_07•Updated 8 months ago
3export async function getModelBuilder(spec: {
4 type?: "llm" | "chat" | "embedding";
5 provider?: "openai" | "huggingface";
6} = { type: "llm", provider: "openai" }, options?: any) {
7 const { extend, cond, matches, invoke } = await import("npm:lodash-es");
8 // Set up LangSmith tracer
17 // Set up API key for each provider
18 const args = extend({ callbacks }, options);
19 if (spec?.provider === "openai")
20 args.openAIApiKey = process.env.OPENAI;
21 else if (spec?.provider === "huggingface")
22 args.apiKey = process.env.HUGGINGFACE;
24 const setup = cond([
25 [
26 matches({ type: "llm", provider: "openai" }),
27 async () => {
28 const { OpenAI } = await import("npm:langchain/llms/openai");
29 return new OpenAI(args);
30 },
31 ],
32 [
33 matches({ type: "chat", provider: "openai" }),
34 async () => {
35 const { ChatOpenAI } = await import("npm:langchain/chat_models/openai");
36 return new ChatOpenAI(args);
37 },
38 ],
39 [
40 matches({ type: "embedding", provider: "openai" }),
41 async () => {
42 const { OpenAIEmbeddings } = await import(
43 "npm:langchain/embeddings/openai"
44 );
45 return new OpenAIEmbeddings(args);
46 },
47 ],

getModelBuilder_deleted_1727331494 main.tsx (14 matches)

@lt_07•Updated 8 months ago
3export async function getModelBuilder(spec: {
4 type?: "llm" | "chat" | "embedding";
5 provider?: "openai" | "huggingface";
6} = { type: "llm", provider: "openai" }, options?: any) {
7 const { extend, cond, matches, invoke } = await import("npm:lodash-es");
8 // Set up LangSmith tracer
17 // Set up API key for each provider
18 const args = extend({ callbacks }, options);
19 if (spec?.provider === "openai")
20 args.openAIApiKey = process.env.OPENAI;
21 else if (spec?.provider === "huggingface")
22 args.apiKey = process.env.HUGGINGFACE;
24 const setup = cond([
25 [
26 matches({ type: "llm", provider: "openai" }),
27 async () => {
28 const { OpenAI } = await import("npm:langchain/llms/openai");
29 return new OpenAI(args);
30 },
31 ],
32 [
33 matches({ type: "chat", provider: "openai" }),
34 async () => {
35 const { ChatOpenAI } = await import("npm:langchain/chat_models/openai");
36 return new ChatOpenAI(args);
37 },
38 ],
39 [
40 matches({ type: "embedding", provider: "openai" }),
41 async () => {
42 const { OpenAIEmbeddings } = await import(
43 "npm:langchain/embeddings/openai"
44 );
45 return new OpenAIEmbeddings(args);
46 },
47 ],

bedtimeStoryMaker main.tsx (1 match)

@dthyresson•Updated 8 months ago
19import { generateOpenGraphTags, OpenGraphData } from "https://esm.town/v/dthyresson/generateOpenGraphTags";
20import { ValTownLink } from "https://esm.town/v/dthyresson/viewOnValTownComponent";
21import { chat } from "https://esm.town/v/stevekrouse/openai";
22import * as fal from "npm:@fal-ai/serverless-client";
23

free_open_router main.tsx (2 matches)

@taras•Updated 8 months ago
155 },
156 {
157 url: "https://api.groq.com/openai/v1/models",
158 token: Deno.env.get("GROQ_API_KEY"),
159 },
298 if (provider === "groq") {
299 url.host = "api.groq.com";
300 url.pathname = url.pathname.replace("/api/v1", "/openai/v1");
301 url.port = "443";
302 url.protocol = "https";

add_to_notion_w_ai README.md (1 match)

@eyeseethru•Updated 8 months ago
14Supports: checkbox, date, multi_select, number, rich_text, select, status, title, url, email
15
16- Uses `NOTION_API_KEY`, `OPENAI_API_KEY` stored in env variables and uses [Valtown blob storage](https://esm.town/v/std/blob) to store information about the database.
17- Use `get_notion_db_info` to use the stored blob if exists or create one, use `get_and_save_notion_db_info` to create a new blob (and replace an existing one if exists).

add_to_notion_w_ai main.tsx (3 matches)

@eyeseethru•Updated 8 months ago
3import Instructor from "npm:@instructor-ai/instructor";
4import { Client } from "npm:@notionhq/client";
5import OpenAI from "npm:openai";
6import { z } from "npm:zod";
7
26};
27
28const oai = new OpenAI({
29 apiKey: process.env.OPENAI_API_KEY ?? undefined,
30});
31

InventionDetailstoJSONConverter main.tsx (3 matches)

@willthereader•Updated 8 months ago
133 if (request.method === "POST" && new URL(request.url).pathname === "/convert") {
134 try {
135 const { OpenAI } = await import("https://esm.town/v/std/openai");
136 const { blob } = await import("https://esm.town/v/std/blob");
137 const openai = new OpenAI();
138
139 const body = await request.json();
192
193 try {
194 const completion = await openai.chat.completions.create({
195 messages: [
196 {

openai-client (1 file match)

@cricks_unmixed4u•Updated 3 days ago

openai_enrichment (6 file matches)

@stevekrouse•Updated 4 days ago
reconsumeralization
import { OpenAI } from "https://esm.town/v/std/openai";
import { sqlite } from "https://esm.town/v/stevekrouse/sqlite";
/**
 * Practical Implementation of Collective Content Intelligence
 * Bridging advanced AI with collaborative content creation
 */
exp
kwhinnery_openai