# blob_admin/README.md
[Fork this val](https://www.val.town/v/stevekrouse/blob_admin_app/fork)
It uses [basic authentication](https://www.val.town/v/pomdtr/basicAuth) with your [Val Town API Token](https://www.val.town/settings/api) as the password (leave the username field blank).
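As a sketch of what "blank username, token as password" means on the wire (the helper name and the example URL below are illustrative, not part of the val):

```typescript
// Build the Authorization header for basic auth with a blank username
// and a Val Town API token as the password.
function basicAuthHeader(token: string): string {
  return "Basic " + btoa(":" + token); // "username:password" with an empty username
}

// Hypothetical usage against the val's HTTP endpoint:
// await fetch("https://stevekrouse-blob_admin_app.web.val.run/", {
//   headers: { Authorization: basicAuthHeader(myApiToken) },
// });
```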
# TODO
1. Click `Fork`
2. Change `location` (Line 4) to describe your location. It accepts fairly flexible English descriptions, which it resolves to coordinates via [nominatim's geocoder API](https://www.val.town/v/stevekrouse/nominatimSearch).
3. Click `Run`
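For reference, a minimal sketch of that geocoding step. The endpoint and parameters are nominatim's public search API; the helper name here is made up:

```typescript
// Turn a flexible English description into a nominatim search URL.
function nominatimSearchUrl(location: string): string {
  const params = new URLSearchParams({ q: location, format: "json", limit: "1" });
  return `https://nominatim.openstreetmap.org/search?${params}`;
}

// e.g. const [place] = await (await fetch(nominatimSearchUrl("downtown Brooklyn"))).json();
// place.lat and place.lon then give the coordinates.
```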
# twitterAlert/README.md
Change the `query` variable to whatever you want to be notified about.
You can use [Twitter's search operators](https://developer.twitter.com/en/docs/twitter-api/v1/rules-and-filtering/search-operators) to customize your query: combine keywords, filter out others, and much more!
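For example, an illustrative query (not one from this val) combining a few common operators:

```typescript
// Match an exact phrase, exclude retweets, and restrict to English tweets.
const query = '"val town" -filter:retweets lang:en';
```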
## 3. Notification
# excessPlumFrog/README.md
6* Fork this val to your own profile.
7* Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in `tempValsParentFolderId`.
* If you want to use OpenAI models, set the `OPENAI_API_KEY` [env var](https://www.val.town/settings/environment-variables).
* If you want to use Anthropic models, set the `ANTHROPIC_API_KEY` [env var](https://www.val.town/settings/environment-variables).
10* Create a [Val Town API token](https://www.val.town/settings/api), open the browser preview of this val, and use the API token as the password to log in.
<img width=500 src="https://imagedelivery.net/iHX6Ovru0O7AjmyT5yZRoA/7077d1b5-1fa7-4a9b-4b93-f8d01d3e4f00/public"/>
# largeAmaranthCat/README.md
6* Fork this val to your own profile.
7* Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in `tempValsParentFolderId`.
* If you want to use OpenAI models, set the `OPENAI_API_KEY` [env var](https://www.val.town/settings/environment-variables).
* If you want to use Anthropic models, set the `ANTHROPIC_API_KEY` [env var](https://www.val.town/settings/environment-variables).
10* Create a [Val Town API token](https://www.val.town/settings/api), open the browser preview of this val, and use the API token as the password to log in.
<img width=500 src="https://imagedelivery.net/iHX6Ovru0O7AjmyT5yZRoA/7077d1b5-1fa7-4a9b-4b93-f8d01d3e4f00/public"/>
# valleBlogV0/README.md
1* Fork this val to your own profile.
2* Create a [Val Town API token](https://www.val.town/settings/api), open the browser preview of this val, and use the API token as the password to log in.
# dailyDadJoke/README.md
3. 🤣🤣🤣🤣
## API
This val uses the [official_joke_api](https://github.com/15Dkatz/official_joke_api). You can find [more docs here](https://github.com/15Dkatz/official_joke_api), such as how to [grab jokes by type](https://github.com/15Dkatz/official_joke_api?tab=readme-ov-file#grab-jokes-by-type).
# dailyDadJoke/main.tsx
```tsx
import { email } from "https://esm.town/v/std/email";
import { fetchJSON } from "https://esm.town/v/stevekrouse/fetchJSON";

export async function dailyDadJoke() {
  const { setup, punchline } = await fetchJSON("https://official-joke-api.appspot.com/random_joke");
  return email({ subject: setup, text: punchline }); // assumption: the setup works as the subject line
}
```
# legitimateTanTiger/main.tsx
```tsx
/**
 * This code creates a search engine prototype with autocomplete functionality using the Cerebras LLM API.
 * It uses React for the frontend and the Cerebras API for generating autocomplete suggestions.
 * The suggestions are cached in the browser to reduce API calls.
 * It implements a two-step LLM process: first to get initial suggestions, then to filter them for sensibility and ethics.
 * If the second LLM call fails, it displays "Failed to fetch" instead of showing results.
 */
```
```tsx
if (request.method === "POST" && new URL(request.url).pathname === "/suggestions") {
  const { query } = await request.json();
  const apiKey = Deno.env.get("CEREBRAS_API_KEY");
  const initialSuggestions: string[] = [];

  if (!apiKey) {
    return new Response(JSON.stringify({ error: "API key not found" }), {
      status: 500,
      headers: { "Content-Type": "application/json" },
    });
  }

  try {
    const response = await fetch("https://api.cerebras.ai/v1/chat/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${apiKey}`,
      },
      body: JSON.stringify({
        // ...request body elided in this excerpt...
      }),
    });

    if (!response.ok) {
      throw new Error("Failed to fetch from Cerebras API");
    }

    // ...

    // Second LLM call to filter suggestions
    const filterResponse = await fetch("https://api.cerebras.ai/v1/chat/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${apiKey}`,
      },
      body: JSON.stringify({
        // ...request body elided in this excerpt...
      }),
    });
    // ...
  } catch (error) {
    console.error("Error calling Cerebras API:", error);
    return new Response(JSON.stringify({ error: "Failed to fetch" }), {
      status: 500,
      headers: { "Content-Type": "application/json" },
    });
  }
}
```
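The browser-side caching mentioned in the header comment can be sketched like this (`fetchSuggestions` is a hypothetical stand-in for the POST to `/suggestions`; the cache key normalization is an assumption):

```typescript
// Cache suggestion lists per normalized query so repeated keystrokes
// don't trigger another round-trip to the LLM endpoint.
const suggestionCache = new Map<string, string[]>();

async function cachedSuggestions(
  query: string,
  fetchSuggestions: (q: string) => Promise<string[]>,
): Promise<string[]> {
  const key = query.trim().toLowerCase();
  const hit = suggestionCache.get(key);
  if (hit) return hit; // cache hit: no network call
  const fresh = await fetchSuggestions(key);
  suggestionCache.set(key, fresh);
  return fresh;
}
```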