Val Town Code Search

API Access

You can access search results via the JSON API by adding `format=json` to your query:

https://codesearch.val.run/$1?q=openai&page=174&format=json
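As a sketch, the JSON endpoint can be queried like this (the concrete `/search` path is an assumption standing in for the `$1` placeholder above, and the response shape is not documented here):

```typescript
// Build the code-search URL with format=json (path "/search" is assumed, not confirmed).
const url = new URL("https://codesearch.val.run/search");
url.searchParams.set("q", "openai");
url.searchParams.set("format", "json");

// Uncomment to actually fetch the results (network call):
// const results = await fetch(url).then((r) => r.json());
console.log(url.toString());
```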

For typeahead suggestions, use the /typeahead endpoint:

https://codesearch.val.run/typeahead?q=openai

Returns an array of strings, each in the format "username" or "username/projectName".
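Since suggestions come back as plain strings, a small helper can split them into their parts (the helper name and `Suggestion` type are hypothetical, not part of the API):

```typescript
// Hypothetical helper: split a typeahead suggestion into username and optional project name.
type Suggestion = { username: string; projectName?: string };

function parseSuggestion(s: string): Suggestion {
  const [username, projectName] = s.split("/");
  return projectName === undefined ? { username } : { username, projectName };
}

console.log(parseSuggestion("treb0r/weatherGPT")); // { username: "treb0r", projectName: "weatherGPT" }
```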

Found 1970 results for "openai" (2497ms)

openAiFreeUsage • main.tsx • 1 match

@patrickjm•Updated 1 year ago
1 // set at Sat Dec 09 2023 01:45:57 GMT+0000 (Coordinated Universal Time)
2 export let openAiFreeUsage = {"used_quota":12709400,"used_quota_usd":1.27094,"exceeded":false};

weatherGPT • main.tsx • 3 matches

@treb0r•Updated 1 year ago
1 import { email } from "https://esm.town/v/std/email?v=11";
2 import { fetch } from "https://esm.town/v/std/fetch";
3 import { OpenAI } from "npm:openai";
4
5 let location = "Halifax UK";
9 ).then(r => r.json());
10
11 const openai = new OpenAI();
12 let chatCompletion = await openai.chat.completions.create({
13   messages: [{
14     role: "user",

weatherGPT • README.md • 1 match

@treb0r•Updated 1 year ago
1 If you fork this, you'll need to set `OPENAI_API_KEY` in your [Val Town Secrets](https://www.val.town/settings/secrets).

weatherGPT • main.tsx • 3 matches

@ellenchisa•Updated 1 year ago
1 import { email } from "https://esm.town/v/std/email?v=11";
2 import { fetch } from "https://esm.town/v/std/fetch";
3 import { OpenAI } from "npm:openai";
4
5 let location = "san francisco ca";
9 ).then(r => r.json());
10
11 const openai = new OpenAI();
12 let chatCompletion = await openai.chat.completions.create({
13   messages: [{
14     role: "user",

weatherGPT • README.md • 1 match

@ellenchisa•Updated 1 year ago
1 If you fork this, you'll need to set `OPENAI_API_KEY` in your [Val Town Secrets](https://www.val.town/settings/secrets).

annoy • main.tsx • 2 matches

@rcurtiss•Updated 1 year ago
50   `;
51   console.log({ prompt });
52   const response = await fetch("https://api.openai.com/v1/completions", {
53     method: "POST",
54     headers: {
55       "Content-Type": "application/json",
56       "Authorization": "Bearer " + process.env.OPENAI_API_KEY, // Replace with your OpenAI API Key
57     },
58     body: JSON.stringify({
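The fetch options in this snippet follow a common shape for bearer-token APIs. A minimal sketch of building them (the helper name and `OpenAIRequest` type are hypothetical, introduced only for illustration):

```typescript
// Hypothetical helper mirroring the request options used in the snippet above.
type OpenAIRequest = { method: string; headers: Record<string, string>; body: string };

function buildOpenAIRequest(apiKey: string, payload: unknown): OpenAIRequest {
  return {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": "Bearer " + apiKey,
    },
    body: JSON.stringify(payload),
  };
}
```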

modelSampleChatCall • main.tsx • 1 match

@bluemsn•Updated 1 year ago
4   const builder = await getModelBuilder({
5     type: "chat",
6     provider: "openai",
7   });
8   const model = await builder();

getVectorStoreBuilder • main.tsx • 1 match

@bluemsn•Updated 1 year ago
5   type: "memory" | "baas";
6   provider?: "pinecone" | "milvus";
7 } = { type: "memory" }, embed: "openai" | "huggingface" = "openai") {
8   const { cond, matches } = await import("npm:lodash-es");
9   const builder = await getModelBuilder({

getModelBuilder • main.tsx • 14 matches

@bluemsn•Updated 1 year ago
3 export async function getModelBuilder(spec: {
4   type?: "llm" | "chat" | "embedding";
5   provider?: "openai" | "huggingface";
6 } = { type: "llm", provider: "openai" }, options?: any) {
7   const { extend, cond, matches, invoke } = await import("npm:lodash-es");
8
20   const args = extend({ callbacks }, options);
21
22   if (spec?.provider === "openai")
23     args.openAIApiKey = process.env.OPENAI_API_KEY;
24   else if (spec?.provider === "huggingface")
25     args.apiKey = process.env.HUGGINGFACE;
27   const setup = cond([
28     [
29       matches({ type: "llm", provider: "openai" }),
30       async () => {
31         const { OpenAI } = await import("npm:langchain/llms/openai");
32         return new OpenAI(args);
33       },
34     ],
35     [
36       matches({ type: "chat", provider: "openai" }),
37       async () => {
38         const { ChatOpenAI } = await import("npm:langchain/chat_models/openai");
39         return new ChatOpenAI(args);
40       },
41     ],
42     [
43       matches({ type: "embedding", provider: "openai" }),
44       async () => {
45         const { OpenAIEmbeddings } = await import(
46           "npm:langchain/embeddings/openai"
47         );
48         return new OpenAIEmbeddings(args);
49       },
50     ],
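The `cond`/`matches` pair in this snippet implements a table-driven dispatch on `{ type, provider }`. A dependency-free sketch of the same pattern (string return values stand in for the real model constructors; this is not the lodash API itself):

```typescript
type Spec = { type?: string; provider?: string };

// Minimal stand-ins for lodash's matches() and cond().
const matches = (pattern: Spec) => (spec: Spec) =>
  Object.entries(pattern).every(([k, v]) => spec[k as keyof Spec] === v);

const cond = <T>(pairs: Array<[(s: Spec) => boolean, (s: Spec) => T]>) =>
  (spec: Spec): T | undefined => {
    for (const [pred, fn] of pairs) {
      if (pred(spec)) return fn(spec);
    }
    return undefined;
  };

// Dispatch on { type, provider } the same way getModelBuilder does:
const setup = cond([
  [matches({ type: "llm", provider: "openai" }), () => "OpenAI"],
  [matches({ type: "chat", provider: "openai" }), () => "ChatOpenAI"],
  [matches({ type: "embedding", provider: "openai" }), () => "OpenAIEmbeddings"],
]);

console.log(setup({ type: "chat", provider: "openai" })); // "ChatOpenAI"
```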

chat • main.tsx • 5 matches

@bluemsn•Updated 1 year ago
5   options = {},
6 ) => {
7   // Initialize OpenAI API stub
8   const { OpenAI } = await import(
9     "https://esm.sh/openai"
10   );
11   const openai = new OpenAI();
12   const messages = typeof prompt === "string"
13     ? [{ role: "user", content: prompt }]
14     : prompt;
15
16   const completion = await openai.chat.completions.create({
17     messages: messages,
18     model: "gpt-3.5-turbo",
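The string-or-array normalization on lines 12–14 of this snippet can be factored into a small helper (hypothetical name, same logic):

```typescript
type ChatMessage = { role: string; content: string };

// Accept either a raw prompt string or a prepared message array.
function toMessages(prompt: string | ChatMessage[]): ChatMessage[] {
  return typeof prompt === "string" ? [{ role: "user", content: prompt }] : prompt;
}

console.log(toMessages("hello")); // [{ role: "user", content: "hello" }]
```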

openaiproxy • 2 file matches

@skutaans•Updated 4 days ago

token-server • 1 file match

@kwhinnery_openai•Updated 5 days ago
Mint tokens to use with the OpenAI Realtime API for WebRTC
reconsumeralization

import { OpenAI } from "https://esm.town/v/std/openai";
import { sqlite } from "https://esm.town/v/stevekrouse/sqlite";
/**
 * Practical Implementation of Collective Content Intelligence
 * Bridging advanced AI with collaborative content creation
 */
exp

kwhinnery_openai