redditSearch/main.tsx (5 matches)
interface RedditSearchOptions {
  query: string;
  apiKey?: string;
}

export async function redditSearch({
  query,
  apiKey = Deno.env.get("BROWSERBASE_API_KEY"),
}: RedditSearchOptions): Promise<ThreadResult[]> {
  if (!apiKey) {
    throw new Error("BrowserBase API key is required");
  }

  const puppeteer = new PuppeteerDeno({ productName: "chrome" });
  const browser = await puppeteer.connect({
    browserWSEndpoint: `wss://connect.browserbase.com?apiKey=${apiKey}&enableProxy=true`,
    ignoreHTTPSErrors: true,
  });
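The snippet stops right after the browser connection. As a rough sketch, the rest of the function presumably opens a page, runs a Reddit search, and maps the results into `ThreadResult` objects; the selector and result shape below are assumptions, not the val's actual code:

```ts
// hypothetical continuation: the ThreadResult fields and the selector are illustrative
const page = await browser.newPage();
await page.goto(
  `https://old.reddit.com/search?q=${encodeURIComponent(query)}`,
  { waitUntil: "domcontentloaded" },
);
const results: ThreadResult[] = await page.evaluate(() =>
  Array.from(document.querySelectorAll("a.search-title")).map((a) => ({
    title: a.textContent ?? "",
    url: (a as HTMLAnchorElement).href,
  }))
);
await browser.close();
return results;
```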
export default function handler(req) {
3return new Response(`"\n <!DOCTYPE html>\n <html>\n <head>\n <title>Claude Chat Conversation</title>\n <meta charset=\"UTF-8\">\n <style>\n body {\n font-family: system-ui, -apple-system, sans-serif;\n line-height: 1.5;\n max-width: 800px;\n margin: 0 auto;\n padding: 20px;\n background: #f9fafb;\n }\n .message {\n margin: 20px 0;\n padding: 15px;\n border-radius: 8px;\n }\n .human {\n background: #e5e7eb;\n }\n .assistant {\n background: #dbeafe;\n }\n .role {\n font-weight: bold;\n margin-bottom: 8px;\n }\n </style>\n </head>\n <body>\n \n <div class=\"message human\">\n <div class=\"role\">Human</div>\n <div class=\"content\">This is an engineering proposal. Convert that into a tech talk about 10 minutes long</div>\n </div>\n \n <div class=\"message assistant\">\n <div class=\"role\">Claude</div>\n <div class=\"content\">Tech Talk: Optimizing Data Storage with Symbol TablesClick to open document\nThis tech talk covers the key aspects of your data optimization project in a format suitable for a 10-minute presentation. Would you like me to adjust any specific section or add more technical details to any part?</div>\n </div>\n \n <div class=\"message human\">\n <div class=\"role\">Human</div>\n <div class=\"content\">Write the script for it</div>\n </div>\n \n <div class=\"message assistant\">\n <div class=\"role\">Claude</div>\n <div class=\"content\">Tech Talk Script: Optimizing Data Storage with Symbol TablesClick to open document\nThis script should take about 10 minutes to deliver at a comfortable speaking pace. It maintains technical depth while remaining accessible and includes natural transitions between sections. Would you like me to adjust any particular section or add more technical details anywhere?\nKey features of the script:\n\nConversational tone while maintaining technical accuracy\nClear narrative arc from problem to solution to results\nTechnical details balanced with practical implications\nSpecific examples and metrics to support key points\nNatural breaking points for slides/demos if needed</div>\n </div>\n \n <div class=\"message human\">\n <div class=\"role\">Human</div>\n <div class=\"content\">Draw a mermaid diagram to show data flows in a decision flow</div>\n </div>\n \n <div class=\"message assistant\">\n <div class=\"role\">Claude</div>\n <div class=\"content\">I'll create a Mermaid diagram showing how data flows through the system, including the new symbol table optimization.\nDecision Flow System ArchitectureClick to open diagram\nThe diagram shows the complete flow of data through our system:\n\nClient sends input data to Router\nRouter directs request to Runner\nRunner:\n\nExecutes the decision flow\nCollects node results\nApplies symbol table deduplication\n\n\nDecision message sent via EventBridge\nArchiver:\n\nProcesses the message\nStores data across three tables:\n\nSymbol Table (new)\nDecision Table\nData Table\n\n\n\n\nExporter retrieves data via Decision History API\n\nThe color coding helps distinguish:\n\nPink: Services\nBlue: Databases\nYellow: Message bus\n\nWould you like me to modify any part of the diagram or add more detail to any specific component?</div>\n </div>\n \n <div class=\"message human\">\n <div class=\"role\">Human</div>\n <div class=\"content\">I meant in the decision flow, as the data flow through the nodes, how it accumulates node results</div>\n </div>\n \n <div class=\"message assistant\">\n <div class=\"role\">Claude</div>\n <div class=\"content\">Ah, I'll create a Mermaid diagram showing how data flows through nodes in a 
decision flow and how node results accumulate, both in the original format and with symbol table optimization.\nDecision Flow Node Results AccumulationClick to open diagram\nThis diagram contrasts the two approaches:\n\nOriginal Storage (top):\n\nEach node stores complete copies of all data\nData accumulates and duplicates as it flows through nodes\nNode 3 ends up storing three copies of the same values\n\n\nSymbol Table Storage (bottom):\n\nUnique key-value pairs stored once in symbol table\nNodes only store indexes to the symbol table\nNo data duplication\nEasy to reconstruct full state at any node using indexes\n\n\n\nColor coding:\n\nPink: Flow nodes\nLight blue: Original node results\nYellow: Symbol table entries\nBlue: Node result indexes\n\nThe optimization becomes more apparent as data flows through the nodes - while the original storage duplicates values at each step, the symbol table approach maintains a single copy of each unique value and uses lightweight indexes to reference them.\nWould you like me to adjust any part of the diagram or add more detail to better illustrate the data flow?</div>\n </div>\n \n <div class=\"message human\">\n <div class=\"role\">Human</div>\n <div class=\"content\">It will be with a technical member on the team, please come prepared to talk through a technical project that you've worked on previously. Be ready to walk through the specific pieces of the project you owned, the architecture of the system, and the impact of the decisions made. You should be prepared to describe the project for at least ~20 minutes, going into significant technical detail about key components, also feel free to prepare diagrams and slides if that will help you describe the project.</div>\n </div>\n \n <div class=\"message assistant\">\n <div class=\"role\">Claude</div>\n <div class=\"content\">I'll help you prepare for a technical deep dive interview by creating a comprehensive walkthrough of PEP 186 based on the document. 
Let's structure this with both a narrative outline and supporting diagrams.\nSystem ArchitectureClick to open diagram\nNode Result CompressionClick to open diagram\nHere's the technical deep dive outline for a 20-minute discussion:\n\nProject Overview (2 minutes)\n\nProblem: Rapidly growing Aurora storage costs\nGoals: 80% data reduction, 50% cost reduction for key customers\nPerformance constraints: <5ms latency impact\nKey customer impact: Isaac, Branch, and Zilch use cases\n\n\nTechnical Analysis (3 minutes)\n\nRoot cause: Data duplication in node results\nInitial compression analysis showing ~20.6% with basic approach\nTheoretical limits study using lz4 compression\nDecision to use symbol tables based on append-only nature of data\n\n\nCore Technical Solution (5 minutes)\n\nSymbolTableV2 class design:\npythonCopyclass SymbolTableV2(BaseModel):\n version: t.Literal[\"v2\"]\n tbl: list[orjson.Fragment] = []\n tix: list[SymbolTableRef] = []\n\nKey optimizations:\n\nKey-value pair deduplication vs value-only\nIndex-based references vs 64-bit hashes\nZSTD compression for serialized data\n\n\nStorage schema decisions and rationale for separate table\n\n\nSystem Architecture (4 minutes)\n\nPipeline components:\n\nRunner: Symbol table creation and compression\nEventBridge: Message transport\nArchiver: Persistence and version handling\nExporter: Decision history API integration\n\n\nDatabase schema evolution\nBackward compatibility handling\n\n\nEdge Cases & Challenges (3 minutes)\n\nAsync resumptions and out-of-order messages\nFlow of flows and nested decision handling\nLoop handling\nProvenance data preservation\nVersion compatibility during rollout\n\n\nResults & Impact (3 minutes)\n\nCompression improvement: 20.6% → 86%\nMessage size reduction: 65KB → 25KB\nCost impact for key customers\nPerformance metrics showing <5ms latency impact\nStorage growth rate flattening\n\n\n\nKey technical points to emphasize:\n\nWhy we chose a separate symbol table vs using the existing data table\nTrade-offs in moving deduplication upstream to Runner\nCompression algorithm selection and performance implications\nSchema design decisions and indexing strategy\nBackward compatibility approach\n\nQuestions to be prepared for:\n\nWhy not use database-level compression (TOAST)?\nAlternative approaches considered\nScalability implications\nRecovery and error handling\nPerformance monitoring and alerts\nFuture optimization opportunities\n\nWould you like me to expand on any of these sections or create additional diagrams to illustrate specific components?</div>\n </div>\n \n </body>\n </html>\n"`, {
  headers: {
    "Content-Type": "text/html",
1. Click `Fork`
2. Change `location` (Line 4) to describe your location. It accepts fairly flexible English descriptions which it turns into locations via [nominatim's geocoder API](https://www.val.town/v/stevekrouse/nominatimSearch) (see the sketch below).
3. Click `Run`
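For context, a minimal sketch of the kind of lookup the geocoder performs under the hood, using Nominatim's public search endpoint (the example query is illustrative):

```ts
// geocode a free-form English description with Nominatim
const resp = await fetch(
  `https://nominatim.openstreetmap.org/search?format=json&limit=1&q=${
    encodeURIComponent("downtown Brooklyn")
  }`,
  { headers: { "User-Agent": "val-town-example" } }, // Nominatim asks for a User-Agent
);
const [place] = await resp.json();
console.log(place?.lat, place?.lon);
```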
sqliteExplorerApp/README.md (1 match)
## Authentication

Log in to your SQLite Explorer with [password authentication](https://www.val.town/v/pomdtr/password_auth), using your [Val Town API Token](https://www.val.town/settings/api) as the password.
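A hypothetical sketch of what that wrapping might look like; the actual export name and options of `password_auth` may differ:

```ts
// wrap the app's handler so every request must present the password
// (here: your Val Town API token) before reaching SQLite Explorer
import { passwordAuth } from "https://esm.town/v/pomdtr/password_auth";

async function handler(req: Request): Promise<Response> {
  return new Response("SQLite Explorer"); // stand-in for the real app handler
}

export default passwordAuth(handler);
```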
## Todos / Plans
sqliteExplorerApp/main.tsx (2 matches)
<head>
  <title>SQLite Explorer</title>
  <link rel="preconnect" href="https://fonts.googleapis.com" />
  <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin />
  <link
    href="https://fonts.googleapis.com/css2?family=Fira+Code:wght@300..700&family=Source+Sans+3:ital,wght@0,200..900;1,200..900&display=swap"
    rel="stylesheet"
  />
TaylorSwiftMoments/main.tsx (10 matches)
const fetchImages = async () => {
  try {
    const response = await fetch("/api/images");
    const images = await response.json();
    setGalleryImages(images);

  try {
    // Enhance the prompt using the server-side endpoint
    const enhancedPromptResponse = await fetch("/api/enhance-prompt", {
      method: "POST",
      headers: { "Content-Type": "application/json" },

const fal = createFalClient({
  proxyUrl: "/api/fal/proxy",
});

const likeImage = async (imageId) => {
  try {
    await fetch(`/api/images/${imageId}/like`, { method: "POST" });
    fetchImages(); // Refresh the gallery after liking
  } catch (error) {

const deleteImage = async (imageId) => {
  try {
    await fetch(`/api/images/${imageId}`, { method: "DELETE" });
    fetchImages(); // Refresh the gallery after deleting
  } catch (error) {

const addToGallery = async () => {
  try {
    await fetch("/api/images", {
      method: "POST",
      headers: { "Content-Type": "application/json" },

}

if (url.pathname === "/api/fal/proxy") {
  return falProxyRequest(req);
}

if (url.pathname === "/api/enhance-prompt") {
  if (req.method === "POST") {
    const { prompt, era } = await req.json();

}

if (url.pathname === "/api/images") {
  if (req.method === "GET") {
    const images = await sqlite.execute(`

}

if (url.pathname.startsWith("/api/images/")) {
  const id = url.pathname.split("/")[3];
  if (url.pathname.endsWith("/like") && req.method === "POST") {
vigilantRedHamster/main.tsx (1 match)
async function fetchRandomJoke() {
  const response = await fetch(
    "https://official-joke-api.appspot.com/random_joke",
  );
  return response.json();
}
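A minimal usage sketch, assuming the joke API's usual `{ setup, punchline }` response shape:

```ts
const joke = await fetchRandomJoke();
console.log(joke.setup);
console.log(joke.punchline);
```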
toowiredmemories/main.tsx (35 matches)
import { v1 as uuidv1 } from "https://esm.sh/uuid@9.0.0";
import { blob } from "https://esm.town/v/std/blob";
import { getPolicy } from "https://esm.town/v/xkonti/memoryApiPolicy";
import { Hono } from "https://esm.sh/hono@3.5.4";

export const handleMemoryApiRequest = async (
  req: Request,
  apiName: string,
  contactEmail: string,
  lastPolicyUpdate: string,
  blobKeyPrefix: string,
  apiKeyPrefix: string,
) => {
  // ==== HELPERS ====

  }

  const expectedKey = Deno.env.get(apiKeyPrefix + key);
  if (token !== expectedKey) {
    return { memoriesKey: "", error: c.text("Forbidden", 403) };

  };

  // API

  const app = new Hono();

  app.get("/privacy", async (c) => {
    const policy = getPolicy(apiName, contactEmail, lastPolicyUpdate);
    c.header("Content-Type", "text/html");
    return c.html(policy);
  });

  app.get("/openapi", async (c) => {
    const specification = `
{
  "openapi": "3.1.0",
  "info": {
    "title": "Memories and Conversations API",
    "description": "API for managing and storing long-term memories, AI conversations, and file attachments.",
    "version": "1.3.0"
  },
  "servers": [
    {
      "url": "<APIURL>"
    }
  ],
  "401": {
    "description": "Unauthorized - Missing or invalid API key."
  },
  "403": {
    "description": "Forbidden - Invalid API key."
  }
743"get": {
744"summary": "Get privacy policy",
745"description": "Retrieve the privacy policy for the API.",
746"responses": {
747"200": {
758}
759},
760"/openapi": {
761"get": {
762"summary": "Get OpenAPI specification",
763"description": "Retrieve the OpenAPI specification for the API.",
764"responses": {
765"200": {
766"description": "Successful retrieval of the OpenAPI specification.",
767"content": {
768"application/json": {
toowiredmemories/README.md (21 matches)
A simple REST API that allows your GPT to save and recall snippets of data (memories). You can read my blog post explaining it in detail here: [xkonti.tech](https://xkonti.tech/blog/giving-gpt-memory/)

# Demonstration

What GPT sent to the API:
# Setup

There are several steps to set up the API:

- deploy and configure the API
- create an API key for your GPT
- add an action for the API in your GPT
- add a prompt section to your GPT so that it can use it properly

## Deploying the API on Val Town

Deploy your own memory API. You can fork the following Val to do it: https://www.val.town/v/xkonti/memoryApiExample

In the code, configure the appropriate values (a configuration sketch follows the list):
- `apiName` - the name of your API, used in the Privacy Policy (e.g. `Memory API`)
- `contactEmail` - the email to provide for contact in the Privacy Policy (e.g. `some@email.com`)
- `lastPolicyUpdate` - the date the Privacy Policy was last updated (e.g. `2023-11-28`)
- `blobKeyPrefix` - the prefix for the blob storage keys used by your API - more info below (e.g. `gpt:memories:`)
- `apiKeyPrefix` - the prefix for your API key secrets - more info below (e.g. `GPTMEMORYAPI_KEY_`)
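Putting those values together, a minimal sketch of what the forked val's entry point might look like, assuming it delegates to the `handleMemoryApiRequest` function shown in main.tsx above (the import path is illustrative):

```ts
// illustrative import; point this at the val that defines handleMemoryApiRequest
import { handleMemoryApiRequest } from "https://esm.town/v/xkonti/memoryApi";

export const memoryApiExample = (req: Request) =>
  handleMemoryApiRequest(
    req,
    "Memory API",        // apiName
    "some@email.com",    // contactEmail
    "2023-11-28",        // lastPolicyUpdate
    "gpt:memories:",     // blobKeyPrefix
    "GPTMEMORYAPI_KEY_", // apiKeyPrefix
  );
```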
## Create API keys

The Memory API is designed to serve multiple GPTs at the same time. Each GPT should have its own unique **name** and **API key**.

The **name** is used for identifying the specific GPT and is appended to both:

- `blobKeyPrefix` - to maintain separate memory storage from other GPTs
- `apiKeyPrefix` - to maintain a separate API key for each GPT

1. Pick a unique alphanumeric name for your GPT. For example `personaltrainer`.
2. Generate an alphanumeric API key for your GPT. For example `Wrangle-Chapped-Monkhood4-Domain-Suspend`.
3. Add a new secret to your Val Town secrets storage. The key should be the chosen name prefixed with `apiKeyPrefix`; using the default, that would be `GPTMEMORYAPI_KEY_personaltrainer`. The value of the secret should be the API key itself.

The memories of the GPT will be stored in blob storage under the key `blobKeyPrefix + name`, for example: `gpt:memories:personaltrainer`.
## Adding the action

1. Add a new action in your GPT.
2. Get the OpenAPI specification by calling the `/openapi` endpoint of your API.
3. Change all `<APIURL>` instances within the specification to the URL of your deployed API. For example `https://xkonti-memoryapiexample.web.val.run`.
4. Set the authentication method to Basic and provide a [base64-encoded](https://www.base64encode.org/) version of `<name>:<apiKey>` (see the sketch below). For example: `personaltrainer:Wrangle-Chapped-Monkhood4-Domain-Suspend` -> `cGVyc29uYWx0cmFpbmVyOldyYW5nbGUtQ2hhcHBlZC1Nb25raG9vZDQtRG9tYWluLVN1c3BlbmQ=`
5. Add the link to the privacy policy, which is the `/privacy` endpoint of your API. For example: `https://xkonti-memoryapiexample.web.val.run/privacy`
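For step 4, the credential can also be produced locally; a one-line sketch using the standard `btoa`:

```ts
// base64-encode "<name>:<apiKey>" for the action's Basic authentication
const credential = btoa("personaltrainer:Wrangle-Chapped-Monkhood4-Domain-Suspend");
// cGVyc29uYWx0cmFpbmVyOldyYW5nbGUtQ2hhcHBlZC1Nb25raG9vZDQtRG9tYWluLVN1c3BlbmQ=
```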
## Adding the prompt section

Migrated from folder: APIs/toowiredmemories
fetchNewPublicGitHubRepos/main.tsx (3 matches)
const order = "desc";

const url = `https://api.github.com/search/repositories?q=${encodeURIComponent(query)}&sort=${sort}&order=${order}`;

const response = await fetch(url, {
  headers: {
    "Accept": "application/vnd.github.v3+json",
    "User-Agent": "Deno-GitHub-Repo-Fetcher", // GitHub API requires a user-agent header
  },
});

if (!response.ok) {
  throw new Error(`GitHub API responded with status: ${response.status}`);
}
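Assuming the standard shape of GitHub's repository-search response (`{ total_count, items: [...] }`), consuming it might look like:

```ts
// list the matching repositories from the search response
const data = await response.json();
for (const repo of data.items ?? []) {
  console.log(`${repo.full_name}: ${repo.html_url}`);
}
```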