Val Town Code Search

API Access

You can access search results via the JSON API by adding format=json to your query:

https://codesearch.val.run/?q=openai&page=107&format=json
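
For example, a val can fetch a page of results with a plain fetch call. This is a minimal sketch: the root-path URL mirrors the example above, and the shape of the returned JSON is not documented on this page, so the body is only parsed and logged.

// Query the code search JSON API for "openai".
const res = await fetch("https://codesearch.val.run/?q=openai&format=json");
if (!res.ok) throw new Error(`Search request failed: ${res.status}`);
const results = await res.json();
console.log(results);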

For typeahead suggestions, use the /typeahead endpoint:

https://codesearch.val.run/typeahead?q=openai

Returns an array of strings in the format "username" or "username/projectName".
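
A minimal sketch of calling it from a val, assuming the response body is exactly that array of strings:

// Fetch typeahead suggestions for the prefix "openai".
const res = await fetch("https://codesearch.val.run/typeahead?q=openai");
const suggestions: string[] = await res.json();
// e.g. ["stevekrouse", "stevekrouse/testOpenAI", ...] (illustrative values)
console.log(suggestions);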

Found 1602 results for "openai" (1199ms)

VALLE • README.md • 1 match

@trantion • Updated 8 months ago
6* Fork this val to your own profile.
7* Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in `tempValsParentFolderId`.
8* If you want to use OpenAI models you need to set the `OPENAI_API_KEY` [env var](https://www.val.town/settings/environment-variables).
9* If you want to use Anthropic models you need to set the `ANTHROPIC_API_KEY` [env var](https://www.val.town/settings/environment-variables).
10* Create a [Val Town API token](https://www.val.town/settings/api), open the browser preview of this val, and use the API token as the password to log in.

CoverLetterGenerator • main.tsx • 7 matches

@shawnbasquiat • Updated 8 months ago
1// This val creates a cover letter generator using OpenAI's GPT model
2// It takes a resume (as a PDF file) and job description as input and returns a concise cover letter
3
4import { OpenAI } from "https://esm.town/v/std/openai";
5
6export default async function server(req: Request): Promise<Response> {
111async function generateCoverLetter(resume: string, jobDescription: string): Promise<string> {
112 console.log("Entering generateCoverLetter function");
113 const openai = new OpenAI();
114 console.log("OpenAI instance created");
115
116 try {
117 const completion = await openai.chat.completions.create({
118 model: "gpt-4o-mini",
119 messages: [
138 });
139
140 console.log("OpenAI API call completed");
141 return completion.choices[0].message.content || "Unable to generate cover letter.";
142 } catch (error) {
143 console.error("Error in OpenAI API call:", error);
144 throw error;
145 }
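
The excerpt elides the prompt itself (the contents of the messages array). A minimal sketch of the kind of call it wraps; the message contents below are assumptions, not the author's actual prompt:

const completion = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [
    // Assumed prompt structure; the original prompt is not shown in the excerpt.
    { role: "system", content: "You write concise, tailored cover letters." },
    { role: "user", content: `Resume:\n${resume}\n\nJob description:\n${jobDescription}` },
  ],
});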

longOliveGuppy • main.tsx • 1 match

@sharanbabu • Updated 8 months ago
1// This chatbot app will use a simple React frontend to display messages and allow user input.
2// The backend will use OpenAI's GPT model to generate responses.
3// We'll use SQLite to store conversation history.
4
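
Only the header comments of this val match the query. A minimal sketch of the conversation-history piece they describe, using Val Town's std/sqlite; the table name and schema are assumptions, not the val's actual code:

import { sqlite } from "https://esm.town/v/std/sqlite";

// Assumed table for illustration; the val's real schema is not shown.
await sqlite.execute(`CREATE TABLE IF NOT EXISTS chat_messages (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  role TEXT NOT NULL,
  content TEXT NOT NULL
)`);

await sqlite.execute({
  sql: "INSERT INTO chat_messages (role, content) VALUES (?, ?)",
  args: ["user", "Hello!"],
});

const history = await sqlite.execute("SELECT role, content FROM chat_messages ORDER BY id");
console.log(history.rows);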

BlogChatbotServer • main.tsx • 5 matches

@weaverwhale • Updated 8 months ago
1// This approach will create a chatbot using OpenAI's SDK with access to a blog's content stored in a JSON.
2// We'll use Hono for routing, OpenAI for the chatbot, and stream the responses to the client.
3// The blog content will be stored as JSON and used as a tool in the chatbot's chain.
4
5import { Hono } from "https://esm.sh/hono";
6import { streamSSE } from "https://esm.sh/hono/streaming";
7import { OpenAI } from "https://esm.town/v/std/openai";
8
9const app = new Hono();
114app.post("/chat", async (c) => {
115 const { message } = await c.req.json();
116 const openai = new OpenAI();
117
118 return streamSSE(c, async (stream) => {
119 const blogSearchResults = searchBlogContent(message);
120
121 const completion = await openai.chat.completions.create({
122 model: "gpt-4-mini",
123 messages: [
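
The excerpt stops at the start of the messages array (and "gpt-4-mini" is presumably a typo for "gpt-4o-mini"). A sketch of how the rest of the handler can forward the model's streamed tokens over SSE; it assumes the completion is created with stream: true and runs inside the streamSSE callback, where stream, message, and blogSearchResults are in scope:

const completion = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  stream: true,
  messages: [
    // Assumed prompt wiring; the val's actual messages are not shown.
    { role: "system", content: "Answer questions using the blog posts provided." },
    { role: "user", content: `${message}\n\nRelevant posts:\n${JSON.stringify(blogSearchResults)}` },
  ],
});

for await (const chunk of completion) {
  const delta = chunk.choices[0]?.delta?.content;
  if (delta) await stream.writeSSE({ data: delta });
}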

ministerialLavenderSloth • main.tsx • 3 matches

@maxm • Updated 8 months ago
1import { OpenAI } from "https://esm.town/v/std/openai";
2export default async function(req: Request): Promise<Response> {
3 const openai = new OpenAI();
4 const stream = await openai.chat.completions.create({
5 stream: true,
6 messages: [{ role: "user", content: "Write a poem in the style of beowulf about the DMV" }],
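
The excerpt ends before the handler returns. One common way to finish it is to pipe the streamed deltas into a ReadableStream and return that as the Response body; this is a sketch, not necessarily what this val actually does:

const encoder = new TextEncoder();
const body = new ReadableStream({
  async start(controller) {
    for await (const chunk of stream) {
      const delta = chunk.choices[0]?.delta?.content;
      if (delta) controller.enqueue(encoder.encode(delta));
    }
    controller.close();
  },
});

return new Response(body, { headers: { "Content-Type": "text/plain; charset=utf-8" } });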

FindTrendsUsingGPT • main.tsx • 4 matches

@weaverwhale • Updated 8 months ago
3
4import { Hono } from "npm:hono";
5import { OpenAI } from "npm:openai";
6
7const trendGPT = async (data, onData) => {
8 const openai = new OpenAI();
9
10 // Start the OpenAI stream
11 const chatStream = await openai.chat.completions.create({
12 messages: [
13 {
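
The excerpt cuts off inside the messages array. A sketch of how the rest of trendGPT might consume the stream and hand tokens to its onData callback; the model, prompt, and stream flag below are assumptions, since that part of the val is not shown:

const chatStream = await openai.chat.completions.create({
  model: "gpt-4o-mini", // assumed
  stream: true,         // assumed, per the "Start the OpenAI stream" comment
  messages: [
    { role: "system", content: "Identify notable trends in the data you are given." },
    { role: "user", content: JSON.stringify(data) },
  ],
});

for await (const chunk of chatStream) {
  const delta = chunk.choices[0]?.delta?.content;
  if (delta) onData(delta);
}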

VALLE • README.md • 1 match

@hubingkang • Updated 8 months ago
6* Fork this val to your own profile.
7* Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in `tempValsParentFolderId`.
8* If you want to use OpenAI models you need to set the `OPENAI_API_KEY` [env var](https://www.val.town/settings/environment-variables).
9* If you want to use Anthropic models you need to set the `ANTHROPIC_API_KEY` [env var](https://www.val.town/settings/environment-variables).
10* Create a [Val Town API token](https://www.val.town/settings/api), open the browser preview of this val, and use the API token as the password to log in.

VALLE • README.md • 1 match

@tgrv • Updated 8 months ago
6* Fork this val to your own profile.
7* Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in `tempValsParentFolderId`.
8* If you want to use OpenAI models you need to set the `OPENAI_API_KEY` [env var](https://www.val.town/settings/environment-variables).
9* If you want to use Anthropic models you need to set the `ANTHROPIC_API_KEY` [env var](https://www.val.town/settings/environment-variables).
10* Create a [Val Town API token](https://www.val.town/settings/api), open the browser preview of this val, and use the API token as the password to log in.

VALLE • README.md • 1 match

@heaversm • Updated 8 months ago
6* Fork this val to your own profile.
7* Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in `tempValsParentFolderId`.
8* If you want to use OpenAI models you need to set the `OPENAI_API_KEY` [env var](https://www.val.town/settings/environment-variables).
9* If you want to use Anthropic models you need to set the `ANTHROPIC_API_KEY` [env var](https://www.val.town/settings/environment-variables).
10* Create a [Val Town API token](https://www.val.town/settings/api), open the browser preview of this val, and use the API token as the password to log in.

valleBlogV0 • main.tsx • 3 matches

@heaversm • Updated 8 months ago
3import { passwordAuth } from "https://esm.town/v/pomdtr/password_auth?v=84";
4import { verifyToken } from "https://esm.town/v/pomdtr/verifyToken?v=1";
5import { openai } from "npm:@ai-sdk/openai";
6import ValTown from "npm:@valtown/sdk";
7import { streamText } from "npm:ai";
36
37 const stream = await streamText({
38 model: openai("gpt-4o", {
39 baseURL: "https://std-openaiproxy.web.val.run/v1",
40 apiKey: Deno.env.get("valtown"),
41 } as any),
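
The excerpt shows the @ai-sdk/openai model being pointed at Val Town's OpenAI proxy and authenticated with a Val Town token. One way the resulting stream can be sent back to the client, assuming a recent version of the ai package (which exposes toTextStreamResponse on the streamText result); the val's actual handling is not shown:

// Return the generated text as a streaming Response.
return stream.toTextStreamResponse();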

translateToEnglishWithOpenAI • 1 file match

@shlmt • Updated 22 hours ago

testOpenAI • 1 file match

@stevekrouse • Updated 2 days ago
lost1991
import { OpenAI } from "https://esm.town/v/std/openai";

export default async function(req: Request): Promise<Response> {
  if (req.method === "OPTIONS") {
    return new Response(null, {
      headers: {
        "Access-Control-Allow-Origin": "*",