Val Town Code Search

API Access

You can access search results via the JSON API by adding format=json to your query URL:

https://codesearch.val.run/?q=api&page=1233&format=json

For typeahead suggestions, use the /typeahead endpoint:

https://codesearch.val.run/typeahead?q=api

Returns an array of strings in the format "username" or "username/projectName".
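As a sketch, both endpoints above can be called from TypeScript. The shape of the search JSON response is not documented here, so the helpers below (hypothetical names) only build the request URLs and leave response parsing to the caller:

```typescript
const base = "https://codesearch.val.run";

// Build a search URL with format=json, as described above.
function searchUrl(query: string, page = 1): string {
  const u = new URL("/", base);
  u.searchParams.set("q", query);
  u.searchParams.set("page", String(page));
  u.searchParams.set("format", "json");
  return u.toString();
}

// Build a typeahead URL; the endpoint returns an array of strings
// like "username" or "username/projectName".
function typeaheadUrl(query: string): string {
  const u = new URL("/typeahead", base);
  u.searchParams.set("q", query);
  return u.toString();
}

// Usage (uncomment to hit the live endpoints):
// const results = await (await fetch(searchUrl("api"))).json();
// const suggestions: string[] = await (await fetch(typeaheadUrl("api"))).json();
```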

Found 19,337 results for "api" (2843ms)

frame/apod.ts (3 matches)

@charmaine • Updated 2 months ago
11};
12
13const NASA_API_KEY = Deno.env.get("NASA_API_KEY");
14
15export async function GetAPOD(req: Request): Promise<Response> {
16 const url = `https://api.nasa.gov/planetary/apod?api_key=${NASA_API_KEY}&thumbs=true`;
17 const cacheKey = "nasa_apod";
18 const cacheMinutes = 60;
21
22 if (!data) {
23 return new Response("No data from Nasa API", { status: 404 });
24 }
25

groqAudioChat/main.tsx (48 matches)

@yawnxyz • Updated 2 months ago
6import "jsr:@std/dotenv/load"; // needed for deno run; not req for smallweb or valtown
7
8// Function to handle audio transcription using Groq's Whisper API
9export const audioTranscriptionHandler = async (c) => {
10 console.log("🎤 Audio transcription request received");
20 }
21
22 // Get API key from environment variable
23 const apiKey = Deno.env.get("GROQ_API_KEY");
24 if (!apiKey) {
25 console.error("❌ Transcription error: Missing API key");
26 return c.json({ error: "API key not configured" }, 500);
27 }
28
38
39 // If the file doesn't have a proper name or type, add one
40 // This ensures the file has the right extension for the API
41 if (!audioFile.name || !audioFile.type.startsWith('audio/')) {
42 const newFile = new File(
50 }
51
52 // Prepare the form data for Groq API
53 const groqFormData = new FormData();
54
65 groqFormData.append("timestamp_granularities[]", "word");
66
67 // Call Groq API
68 console.log("🎤 Sending request to Groq Whisper API");
69 const start = Date.now();
70 const response = await fetch("https://api.groq.com/openai/v1/audio/transcriptions", {
71 method: "POST",
72 headers: {
73 "Authorization": `Bearer ${apiKey}`
74 },
75 body: groqFormData
76 });
77 const elapsed = Date.now() - start;
78 console.log(`🎤 Groq Whisper API response received in ${elapsed}ms, status: ${response.status}`);
79
80 // Get response content type
99 errorMessage = `Server error: ${response.status} ${response.statusText}`;
100 // Log the full response for debugging
101 console.error("❌ Transcription API error response:", {
102 status: response.status,
103 statusText: response.statusText,
108 }
109 } catch (parseError) {
110 console.error("❌ Error parsing Groq API response:", parseError);
111 errorMessage = "Failed to parse error response from server";
112 }
113
114 return c.json({
115 error: `Groq API error: ${errorMessage}`,
116 status: response.status
117 }, response.status);
150 console.log(`🔵 Last user message: "${messages.find(m => m.role === 'user')?.content?.substring(0, 50)}..."`);
151
152 const GROQ_API_KEY = Deno.env.get("GROQ_API_KEY");
153 if (!GROQ_API_KEY) {
154 console.error("❌ Missing GROQ_API_KEY environment variable");
155 return c.json({ error: "GROQ_API_KEY environment variable is not set" }, 500);
156 }
157
158 console.log("🔵 Sending request to Groq API");
159 const start = Date.now();
160 const response = await fetch("https://api.groq.com/openai/v1/chat/completions", {
161 method: "POST",
162 headers: {
163 "Content-Type": "application/json",
164 "Authorization": `Bearer ${GROQ_API_KEY}`
165 },
166 body: JSON.stringify({
171 });
172 const elapsed = Date.now() - start;
173 console.log(`🔵 Groq API response received in ${elapsed}ms, status: ${response.status}`);
174
175 if (!response.ok) {
176 const errorData = await response.json();
177 console.error("❌ Chat API error:", errorData);
178 return c.json({ error: "Failed to get chat completion", details: errorData }, response.status);
179 }
204 }
205
206 // Get API key from environment variable
207 const apiKey = Deno.env.get("GROQ_API_KEY");
208 if (!apiKey) {
209 console.error("❌ TTS error: Missing API key");
210 return c.json({ error: "API key not configured" }, 500);
211 }
212
213 // Call Groq Speech API
214 console.log("🔊 Sending request to Groq Speech API");
215 const start = Date.now();
216 const response = await fetch("https://api.groq.com/openai/v1/audio/speech", {
217 method: "POST",
218 headers: {
219 "Content-Type": "application/json",
220 "Authorization": `Bearer ${apiKey}`
221 },
222 body: JSON.stringify({
228 });
229 const elapsed = Date.now() - start;
230 console.log(`🔊 Groq Speech API response received in ${elapsed}ms, status: ${response.status}`);
231
232 if (!response.ok) {
235 const errorData = await response.json();
236 errorMessage = errorData.error?.message || JSON.stringify(errorData);
237 console.error("❌ TTS API error:", errorData);
238 } catch (e) {
239 // If response is not JSON
240 errorMessage = `Server error: ${response.status} ${response.statusText}`;
241 console.error("❌ TTS API non-JSON error:", errorMessage);
242 }
243
601 // Now immediately send this message to get AI response
602 try {
603 // Prepare messages for the API
604 const apiMessages = this.messages.map(({ role, content }) => ({ role, content }));
605
606 // Ensure first message is always the correct system message for current mode
607 if (apiMessages.length > 0 && apiMessages[0].role === 'system') {
608 const systemMessage = this.chatMode === 'concise'
609 ? 'You are a helpful assistant powered by the Llama-3.3-70b-versatile model. Keep your responses short, concise and conversational. Aim for 1-3 sentences when possible.'
610 : 'You are a helpful assistant powered by the Llama-3.3-70b-versatile model. Respond conversationally and accurately to the user.';
611
612 apiMessages[0].content = systemMessage;
613 }
614
616 method: 'POST',
617 headers: { 'Content-Type': 'application/json' },
618 body: JSON.stringify({ messages: apiMessages })
619 });
620
679 this.statusMessage = 'Thinking...';
680
681 // Prepare messages for the API (excluding UI-only properties)
682 const apiMessages = this.messages.map(({ role, content }) => ({ role, content }));
683
684 // Ensure first message is always the correct system message for current mode
685 if (apiMessages.length > 0 && apiMessages[0].role === 'system') {
686 const systemMessage = this.chatMode === 'concise'
687 ? 'You are a helpful assistant powered by the Llama-3.3-70b-versatile model. Keep your responses short, concise and conversational. Aim for 1-3 sentences when possible.'
688 : 'You are a helpful assistant powered by the Llama-3.3-70b-versatile model. Respond conversationally and accurately to the user.';
689
690 apiMessages[0].content = systemMessage;
691 }
692
695 method: 'POST',
696 headers: { 'Content-Type': 'application/json' },
697 body: JSON.stringify({ messages: apiMessages })
698 });
699
967
968 <p class="text-center text-sm text-gray-600 mt-4">
969 Powered by Llama-3.3-70b-versatile through Groq API. Audio transcription and speech synthesis provided by Groq. Text-to-speech provided through PlayHT. <a class="underline" href="https://console.groq.com/docs/speech-to-text" target="_blank" rel="noopener noreferrer">Documentation here</a>. <a class="underline" href="https://www.val.town/v/yawnxyz/groqAudioChat" target="_blank" rel="noopener noreferrer">Code here</a>
970 </p>
971 <div class="text-center text-sm text-gray-600 mt-4 w-full mx-auto">

groqAudioWordLevel/main.tsx (15 matches)

@yawnxyz • Updated 2 months ago
6import "jsr:@std/dotenv/load"; // needed for deno run; not req for smallweb or valtown
7
8// Function to handle audio transcription using Groq's Whisper API
9export const audioTranscriptionHandler = async (c) => {
10 try {
17 }
18
19 // Get API key from environment variable
20 const apiKey = Deno.env.get("GROQ_API_KEY");
21 if (!apiKey) {
22 return c.json({ error: "API key not configured" }, 500);
23 }
24
25 // Prepare the form data for Groq API
26 const groqFormData = new FormData();
27
36 groqFormData.append("timestamp_granularities[]", "word");
37
38 // Call Groq API
39 const response = await fetch("https://api.groq.com/openai/v1/audio/transcriptions", {
40 method: "POST",
41 headers: {
42 "Authorization": `Bearer ${apiKey}`
43 },
44 body: groqFormData
65 errorMessage = `Server error: ${response.status} ${response.statusText}`;
66 // Log the full response for debugging
67 console.error("Groq API error response:", {
68 status: response.status,
69 statusText: response.statusText,
74 }
75 } catch (parseError) {
76 console.error("Error parsing Groq API response:", parseError);
77 errorMessage = "Failed to parse error response from server";
78 }
79
80 return c.json({
81 error: `Groq API error: ${errorMessage}`,
82 status: response.status
83 }, response.status);
990 <title>Audio Transcription with Word Timestamps</title>
991 <meta property="og:title" content="Audio Transcription with Word Timestamps" />
992 <meta property="og:description" content="Upload your audio and we'll transcribe it with word-level timestamps using Groq Whisper API" />
993 <meta name="twitter:card" content="summary_large_image" />
994 <meta name="twitter:title" content="Audio Transcription with Word Timestamps" />
995 <meta name="twitter:description" content="Upload your audio and we'll transcribe it with word-level timestamps using Groq Whisper API" />
996 <script src="https://cdn.tailwindcss.com"></script>
997 <script src="https://unpkg.com/dexie@3.2.2/dist/dexie.js"></script>
1256 </div>
1257 <p class="text-center text-sm text-gray-600 mt-4">
1258 Uses Groq Whisper API for fast and accurate speech-to-text transcription with word-level timestamps. <a class="underline" href="https://console.groq.com/docs/speech-to-text" target="_blank" rel="noopener noreferrer">Documentation here</a>. <a class="underline" href="https://www.val.town/v/yawnxyz/groqAudioWordLevel" target="_blank" rel="noopener noreferrer">Code here</a>
1259 </p>
1260 <div class="text-center text-sm text-gray-600 mt-4 w-full mx-auto">

OpenTownie/useProjectFiles.ts (1 match)

@arfan • Updated 2 months ago
1import { useState, useEffect } from "https://esm.sh/react@18.2.0?dev";
2import { fetchProjectFiles } from "../utils/api.ts";
3
4interface UseProjectFilesProps {

OpenTownie/useChatLogic.ts (4 matches)

@arfan • Updated 2 months ago
6 project: any;
7 branchId: string | undefined;
8 anthropicApiKey: string;
9 bearerToken: string;
10 selectedFiles: string[];
16 project,
17 branchId,
18 anthropicApiKey,
19 bearerToken,
20 selectedFiles,
35 status,
36 } = useChat({
37 api: "/api/send-message",
38 body: {
39 project,
40 branchId,
41 anthropicApiKey,
42 selectedFiles,
43 images: images

OpenTownie/TODOs.md (1 match)

@arfan • Updated 2 months ago
6- [ ] Rebuild as React Router?
7- [ ] opentownie as a pr bot
8- [ ] give it the ability to see its own client-side and server-side logs by building a middleware that shoves them into a SQLite database, then give it a tool to access them (or use our tRPC API in that tool)
9- [ ] do a browser use or screenshot thing to give it access to its own visual output
10- [ ] Have it default to creating a new branch off main, then embed an iframe of the resulting HTTP val and give you a link to a PR-opening URL

OpenTownie/system_prompt.txt (7 matches)

@arfan • Updated 2 months ago
98- For AI-generated images, use: `https://maxm-imggenurl.web.val.run/the-description-of-your-image`
99- **Storage:** DO NOT use the Deno KV module for storage
100- **Browser APIs:** DO NOT use the `alert()`, `prompt()`, or `confirm()` methods
101- **Weather Data:** Use open-meteo for weather data (doesn't require API keys) unless otherwise specified
102- **View Source:** Add a view source link with `import.meta.url.replace("esm.town", "val.town")` and include `target="_top"` attribute
103- **Error Debugging:** Add `<script src="https://esm.town/v/std/catch"></script>` to HTML to capture client-side errors
104- **Error Handling:** Only use try...catch when there's a clear local resolution; avoid catches that merely log or return 500s - let errors bubble up with full context
105- **Environment Variables:** Use `Deno.env.get('keyname')` and minimize their use - prefer APIs without keys
106- **Imports:** Use `https://esm.sh` for npm and Deno dependencies to ensure compatibility on server and browser
107- **Storage Strategy:** Only use backend storage if explicitly required; prefer simple static client-side sites
188
189### Backend (Hono) Best Practices
190- Hono is the recommended API framework (similar to Express, Flask, or Sinatra)
191- Main entry point should be `backend/index.ts`
192- **Static asset serving:** Use the utility functions to read and serve project files:
209 });
210 ```
211- Create RESTful API routes for CRUD operations
212- Be careful with error handling as Hono tends to swallow errors
213- Consider re-throwing errors to see full stack traces:
226- Use React 18.2.0 consistently in all imports and the `@jsxImportSource` pragma
227- Follow the React component pattern from the example project
228- Handle API calls properly with proper error catching
229
230### Database Patterns
257 - For files in the project, use `readFile` helpers
258
259 5. **API Design:**
260 - `fetch` handler is the entry point for HTTP vals
261 - Run the Hono app with `export default app.fetch // This is the entry point for HTTP vals`

OpenTownie/soundEffects.ts (4 matches)

@arfan • Updated 2 months ago
4
5/**
6 * Plays a bell sound notification using the Web Audio API
7 * @returns A Promise that resolves when the sound has started playing
8 */
13 const AudioContext = window.AudioContext || (window as any).webkitAudioContext;
14 if (!AudioContext) {
15 console.warn("Web Audio API not supported in this browser");
16 resolve();
17 return;
65
66/**
67 * Plays a simple notification sound using the Web Audio API
68 * This is a simpler, shorter bell sound
69 * @returns A Promise that resolves when the sound has started playing
75 const AudioContext = window.AudioContext || (window as any).webkitAudioContext;
76 if (!AudioContext) {
77 console.warn("Web Audio API not supported in this browser");
78 resolve();
79 return;

OpenTownie/README.md (11 matches)

@arfan • Updated 2 months ago
9- **File Browser**: Select specific files to include in the context window for more focused AI assistance
10- **Branch Management**: View, select, and create branches without leaving the app
11- **Cost Tracking**: See estimated API usage costs for each interaction
12- **Sound Notifications**: Get alerted when Claude finishes responding
13- **Mobile-Friendly**: Works on both desktop and mobile devices
15## How It Works
16
17 1. **Login**: Authenticate with your Val Town API token and Anthropic API key
18 2. **Select a Project**: Choose which Val Town project you want to work on
19 3. **Select Files**: Browse your project files and select which ones to include in the context window
25### Prerequisites
26
27- A Val Town account with API access
28- An Anthropic API key (Claude 3.7 Sonnet)
29
30### Setup
31
32 1. Visit the OpenTownie app
33 2. Enter your Val Town API token (with `projects:write` and `users:read` permissions)
34 3. Enter your Anthropic API key
35 4. Click "Login" to access your projects
36
47OpenTownie is built with:
48- React frontend with TypeScript
49- Hono API server backend
50- Tailwind CSS for styling
51- Web Audio API for sound notifications
52- AI SDK for Claude integration
53
54The application proxies requests to the Anthropic API and Val Town API, allowing Claude to view and edit your project files directly.
55
56## Privacy & Security
57
58- Your Val Town API token and Anthropic API key are stored locally in your browser
59- No data is stored on our servers
60- All communication with the APIs is done directly from your browser

OpenTownie/Projects.tsx (1 match)

@arfan • Updated 2 months ago
10
11async function loader({ bearerToken }: { bearerToken: string }) {
12 const data = await (await fetch("/api/projects-loader", {
13 headers: {
14 "Authorization": "Bearer " + bearerToken,

researchAgent (2 file matches)

@thesephist • Updated 11 hours ago
This is a lightweight wrapper around Perplexity's web search API

memoryApiExample (2 file matches)

@ingenierotito • Updated 11 hours ago
apiry
snartapi