Gooder News


(a) What is Gooder News?

Gooder News is a radically positive, endlessly remixable news app that turns every story into something you’ll actually want to read. It curates real news from trusted sources but transforms every story through playful, uplifting, thought-provoking, engaging, and creative lenses – so instead of doomscrolling, you can joyscroll. It’s news that’s actually good – for your brain, and for the world.


(b) Why is Gooder News so cool and novel?

I shouldn’t need to convince you that reading news directly on ad-laden news sites is awful or polarizing, but…

  • Endless remixing: every headline is rewritten in dozens of styles (uplifting, comedic, poetic, even as a bedtime story or as if narrated by Morgan Freeman). Boredom is impossible – there’s always a fresh take.
  • Swipe your mood: Swipe left/right to choose how you want to consume the news – joy, satire, empathy, curiosity, or even a challenge to your own biases.
  • Antidote to doomscrolling: No more bottomless negativity, rage-bait, or algorithmic anxiety. Gooder News is designed to actually improve your day and mental health.
  • Interactive & participatory: Anyone can suggest new remix channels or create their own – giving rise to a news feed that reflects the world as we want to see it.
  • Memorable, not manipulative: It’s designed for delight, not addiction.

(c) Why does it have so much potential for societal good?

  • Rewires Your Brain for Hope: By surfacing the inspiring, the humorous, and the solutions – not just the problems – Gooder News helps people believe that change is possible (and that the world isn’t doomed).
  • Reduces Apathy & Polarization: When people feel empowered and uplifted, they’re more likely to act, connect, and care – an antidote to apathy and “news fatigue.”
  • Civic Engagement, Not Rage Engagement: Designed to build common ground and inspire constructive action, not outrage and division.
  • Fuels Curiosity & Creativity: By remixing stories in so many ways, users are encouraged to think critically, see nuance, and play with ideas – skills at the core of a healthy democracy.
  • Supports Mental Health: In a media landscape that profits off anxiety, Gooder News offers a healthier, life-affirming alternative.
  • Crowdsourced Empathy: Diverse remix channels mean more perspectives, more voices, more representation – and less gatekeeping.

In short:
Gooder News is reimagining what news can be – making it not just bearable, but actually good for you and for society.

Gooder News is the evolution of DeepFeed – more narrowly focused and curated, broadcast-only, and built on a completely custom stack created with agentic coding tools (primarily Claude Code 🧡). Here’s the stack (with a rough sketch of how the pieces fit after the list):

  • Frontend: React Native UI (+ Expo for web), enabling easy-ish development of native iOS and Android apps.
  • Backend: Python 3.12 + FastAPI + PostgreSQL + Docker
  • LLMs: mix of o3 and o4-mini-high via OpenAI flex processing (10 minute delay for 50% cost savings? yes please!)
  • News sources:
    • RSS feeds: BBC, The Guardian, Nature, Science
    • GNews & NewsAPI
    • Reddit
  • Infrastructure & DevOps: Hetzner VPS + Docker Swarm + Caddy reverse proxy + GHCR deployment
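
To make that concrete, here’s a rough, illustrative sketch of what a story-remix endpoint could look like – not the production code. The route shape and prompt are placeholders, and it assumes the OpenAI Python SDK with service_tier="flex" for flex processing:

```python
# Illustrative sketch only - not the production code.
# Assumes the OpenAI Python SDK and that flex processing is requested
# via service_tier="flex" (the ~50% cheaper, slower tier mentioned above).
from fastapi import FastAPI
from pydantic import BaseModel
from openai import OpenAI

app = FastAPI()
client = OpenAI()  # reads OPENAI_API_KEY from the environment

class RemixRequest(BaseModel):
    headline: str
    summary: str
    lens: str = "uplifting"  # e.g. "comedic", "poetic", "bedtime story"

@app.post("/remix")
def remix(req: RemixRequest) -> dict:
    # Flex processing trades latency (possibly minutes) for lower cost,
    # which is fine for a broadcast-only feed generated ahead of time.
    response = client.chat.completions.create(
        model="o4-mini",
        service_tier="flex",
        messages=[
            {"role": "system",
             "content": f"Rewrite the news story below through a {req.lens} lens. "
                        "Keep the facts intact; change only the tone and framing."},
            {"role": "user", "content": f"{req.headline}\n\n{req.summary}"},
        ],
    )
    return {"lens": req.lens, "remixed": response.choices[0].message.content}
```

In the real app a worker would batch these remixes ahead of time rather than serving them on demand, since flex responses can take minutes.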

Texxa – AI, anywhere

I’m beyond excited to share a new project I’ve been working on for the last couple of months! 

At various points in my career, I have flirted with the idea of creating a startup. During my last 2 years at Meta, especially, a big part of me wanted to be a product manager. But a couple of things stopped me. First, I never felt like any of my ideas were quite compelling enough for the time and energy they would take to pursue. Second, and just as importantly, I never had the space to explore and experiment with ideas that didn’t have a clear return on investment. 

Well, both of these things finally happened: a great idea and the time and space to make it happen. 

I found a problem

I was camping on San Juan Island with no reception and got to actually try Apple’s new-ish satellite messaging for the first time. I had been planning out our next day and wishing I could just look up the ferry schedule for our trip home to Seattle.

Camping near Friday Harbor with the fam

And then it clicked: why couldn’t I just send a text with my newfound satellite messaging superpowers to ChatGPT and have it look up the schedule for me?

This is a familiar pattern – something I’ve wanted for years whenever I was in the backcountry, on a plane (before ubiquitous in-flight Wi-Fi), or stuck on 2G and needed to Google a critical piece of information, or even just to sate my often-burning curiosity.

So I made something new!

It turns out you can’t just send a text to ChatGPT. So with the power of numerous AI agents, in under a week I prototyped my own AI agent.

I’m proud to introduce Texxa! You can text it from nearly anywhere, and it will scour the internet for information, distill it all into a concise answer, and then transmit back just that answer.

Because I love naming things, I call this agentic compression, and it now enables you to do things that weren’t even possible before without a proper internet connection or a (potentially expensive) satellite phone.

I created Texxa because it fills a legitimate need and is something I wanted. Once I demonstrated to myself that, yes, this thing works and is awesome, I started working to turn Texxa into more of a proper service – it’s simply something that other people should have access to.


Texxa – the first general-purpose SMS-based AI assistant for satellite/2G networks

  • Texxa brings AI to people without a reliable data connection, with reduced equipment requirements:
    • Backcountry adventurers with a satellite connection on their phone or existing satellite devices
    • People in remote areas or on boats, people with unreliable internet connections, astronauts (probably)
    • There are nearly 1 billion feature phone users globally (particularly in emerging markets) who cannot install AI apps and often have only a 2G connection with no data
    • No app, account, or internet needed.
  • Texxa connects them all to the broader internet by using SMS text messaging on common phones, over ultra-low-bandwidth satellite and edge networks, to reach an LLM-powered AI agent with access to realtime data.

Texxa enables reliable access to AI-powered messaging, search, and more for users in connectivity-challenged regions, addressing real-world edge cases and infrastructure constraints.


The Tech

I had the opportunity to learn SO many things, all made possible by the use of AI – learning about technologies, brainstorming use cases, architecting + coding + debugging a system, and so much more – all within the span of a few months. I’m not a professional software engineer, but I’ve loved diving into this space.

So, how does Texxa work? Read these sections if you want to get technical – skip to The Journey if you don’t!

Agentic compression: an LLM-powered distillation that collapses megabytes of remote data into the one sentence the user actually needs.

A single Google search result: 500 kB. Texxa’s answer: 140 bytes. That’s an effective compression ratio of >3000:1.

Because when a single text message takes at least 30 seconds to send over an 80 bit/s satellite link, every extra byte can mean the difference between getting an answer and not getting one.

Think of Texxa as using an LLM as a hyper-compressor. The agent chews through weather APIs, USGS river gauges, or the entire dang web in a datacenter, then distills the answer into a single 160-character SMS burst. Over Apple’s ~80 bps Globalstar link (that’s 0.08 kbps!) that 300-byte container takes ~30 seconds. Try the same lookup with a chat app (≈ 3 kB) and you wait ~5 minutes; load a full Google results page (≈ 1 MB) and you’ll still be staring at the sky tomorrow. Texxa doesn’t make the pipe faster – it just avoids shoving unnecessary bytes through an anemic pipe in the first place. It’s compression by distillation.

Texxa demonstrates that an LLM agent over SMS can, from the user’s perspective, deliver an answer ~10x faster than a TCP/IP message – and vastly faster than an equivalent Google search – by combining ultra-lightweight SMS with the ability to intelligently gather information for you.

Note: I am not a network engineer and could be completely off base, but I tried hard to disprove that conclusion because it’s a pretty bold statement, even if for a very niche (but common enough) situation.

For reference, a mobile Google search weighs roughly:
  • One-shot, repeat visit (lots of caching): best case ~200–400 kB for a simple fact (e.g., sports score or weather), if all scripts/fonts/etc. are cached and you only load the new HTML and essential data.
  • One-shot, fresh session (no cache): typically 500–1,500 kB for a simple search.

Item | Bytes on wire | Time at ≈ 80 bps
One Texxa SMS (160 GSM-7 chars) | ≈ 300 B on air (payload + SS7) | ≈ 30 s (Apple: “a message might take 30 s”)
One-shot chat-app text (cold socket) | ≈ 3,000 B (TCP + TLS + HTTP + JSON) | ≈ 5 min
Mobile Google results page (no cache) | 500 kB–1.5 MB | ≈ 14–42 h
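
The times in that table are just payload size over link speed; a quick back-of-the-envelope script (assuming the same ~80 bps link and byte counts) reproduces them:

```python
# Back-of-the-envelope airtime at an assumed ~80 bit/s satellite link.
LINK_BPS = 80

def airtime(payload_bytes: int) -> str:
    seconds = payload_bytes * 8 / LINK_BPS
    if seconds < 120:
        return f"{seconds:.0f} s"
    if seconds < 7200:
        return f"{seconds / 60:.0f} min"
    return f"{seconds / 3600:.0f} h"

for label, size in [
    ("Texxa SMS (payload + SS7 overhead)", 300),
    ("Cold-socket chat-app message", 3_000),
    ("Google results page, low end", 500_000),
    ("Google results page, high end", 1_500_000),
]:
    print(f"{label:36} {airtime(size)}")

# Texxa SMS (payload + SS7 overhead)   30 s
# Cold-socket chat-app message         5 min
# Google results page, low end         14 h
# Google results page, high end        42 h
```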

If your query isn’t a single request–single response (“one-shot”), the math breaks completely: every extra resource or redirect adds seconds to minutes at this bit-rate, so a typical Google page would simply never finish loading before the phone or the user times out.

Even a cold IP chat message is ~10× heavier, byte-for-byte, than a single-segment SMS (160 characters – only 140 bytes), stretching delivery from ~½ min to ≈ 5 min on the ultra-narrow satellite link; sticking with SMS uses an order of magnitude less bandwidth and precious battery.

That’s why Texxa sticks to SMS:

  • an SMS delivers in ~30 s even on Apple’s narrow satellite pipe;
  • anything heavier balloons to minutes or hours at the same 80 bps.

Why traditional compression can’t close the gap for tiny payloads

  1. Headers dominate
    TLS + HTTP + JSON framing (≈ 800–1,200 B) don’t compress well, and the handshake has to cross the link in the clear before any payload can flow. Even if the proxy gzips the 140 bytes of user text down to 100 B, the total on-air size is still ≳1 kB – 3–4× a whole SMS (see the sketch after this list).
  2. Handshake tax is fixed
    TCP three-way handshake + TLS 1.3 hello ≈ 250–400 B in each direction, even with 0-RTT resumption. On a cold socket those bytes are unavoidable and can’t be gzipped.
  3. Satellite round-trips are slow
    Each extra handshake round-trip (≈ 250 ms in GEO; tens of ms in LEO plus Globalstar’s TDMA scheduling) stalls the pipe while the radio is already power-limited.
  4. Compression shines only on large objects
    Sat-Fi’s Yippy proxy famously turns a 600 kB news page into ≈ 30 kB, dropping load time from minutes to seconds at 8 kbps. But there’s nothing left to squeeze in a single-sentence JSON reply.
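
To make point 1 tangible, here’s a tiny illustrative script (the answer text is made up) showing that gzip buys essentially nothing at this scale, while the protocol framing around the payload is untouched either way:

```python
# Tiny demo: gzip barely helps (and can hurt) on SMS-sized payloads,
# and it does nothing about the TCP/TLS/HTTP framing around them.
import gzip

answer = b"Ferry Friday Harbor->Anacortes: 14:30, 16:45, 19:05; arrive 60 min early with a vehicle."
compressed = gzip.compress(answer)

print(len(answer), len(compressed))  # compressed is about the same size, often larger,
                                     # because gzip adds ~18 bytes of header/trailer

framing = 1_000  # rough TCP + TLS + HTTP + JSON overhead from the estimate above
print(framing + len(compressed))     # still ~1 kB on the air, vs ~300 B for one SMS
```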

A common misunderstanding is that AI agents like ChatGPT or Texxa are synonymous with LLMs. This is NOT the case; agents are usually highly complex software systems that, yes, are ultimately magical because of the LLM(s) at their core, but they have many other supporting components that bring it all together and mitigate some of the limitations inherent to LLMs…

This high-level system block diagram shows just how many other pieces there are in an AI agent beyond the LLM itself. A key aspect of any agent is its numerous MCP servers for interfacing with outside data sources, so that it doesn’t rely on the model’s training data alone – obviously necessary if you want to check something like the current weather or river flow.
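
Purely as an illustration of that structure (the fake LLM call and stub tools below are stand-ins, not Texxa’s actual model calls or MCP servers), the heart of such an agent is a loop like this: the model either requests a tool or produces a final answer, and tool results get fed back in until a single SMS-sized reply comes out:

```python
# Illustrative agent loop: the LLM asks for tools, the agent runs them
# (in Texxa's case, MCP servers), and results are fed back until the model
# produces a final, SMS-sized answer. Everything below is a stand-in.
from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {
    "nws_weather": lambda query: "Fri: sunny, high 72F, winds NW 8 kt",    # stub
    "usgs_flow": lambda query: "Skykomish @ Gold Bar: 1,240 CFS, steady",  # stub
}

def call_llm(messages: list[dict]) -> dict:
    """Stand-in for the real LLM call: returns a tool request or a final answer."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "nws_weather", "input": messages[-1]["content"]}
    return {"answer": "Fri looks dry and calm near Gold Bar; flow ~1,240 CFS."}

def run_agent(user_text: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": user_text}]
    for _ in range(max_steps):
        reply = call_llm(messages)
        if "answer" in reply:                # model is done: send the SMS
            return reply["answer"][:160]     # clamp to one SMS segment
        tool_result = TOOLS[reply["tool"]](reply["input"])
        messages.append({"role": "tool", "content": tool_result})
    return "Sorry, couldn't find that."

print(run_agent("weather + river flow near Gold Bar this Friday?"))
```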

Here’s the stack I architected and built myself with agentic tools:

  • Core: Python 3.12 + FastAPI + PostgreSQL + Docker
  • AI: OpenRouter LLMs + 14 MCP tool servers
    • 6 custom MCP servers: Transitland, NASA FIRMS wildfires, NOAA aurora forecast, USGS water flow, NWS weather, scheduler
    • 9 reference MCP servers: fetch, Serper, Perplexity, time, calculator, Google Maps, NewsAPI, AlphaVantage, OpenWeatherMap
    • Note: not a ‘true’ duplex streaming implementation of MCP because most LLM providers do not support that schema yet
  • Channels: SMS/Telegram/WhatsApp/RCS via Twilio + Telegram APIs
    • Unified Message Pipeline: Single pipeline for all channels with adapters
    • SMS Optimization: GSM-7 conversion + satellite optimization (accommodates UDH header stripping) – see the sketch after this list
    • Started adding international phone numbers for expanding userbase!
  • Infrastructure & DevOps: Docker Swarm + Caddy reverse proxy + GHCR deployment
    • Security: message encryption + Docker secrets + role-based auth
  • System capabilities:
    • Admin panel: user management, real-time metrics, health checks, error-tracking watchdog
    • PostgreSQL-based message queue and retry logic
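
The GSM-7 conversion step above boils down to something like this simplified sketch (only a handful of character mappings shown; the real pipeline handles many more, plus carriers that strip UDH headers):

```python
# Sketch of the SMS optimization idea: normalize characters that would silently
# force UCS-2 encoding (cutting capacity from 160 to 70 chars), then split into
# segments. The mapping here is a small illustrative subset.
GSM7_SAFE = {
    "\u2018": "'", "\u2019": "'",   # curly single quotes
    "\u201c": '"', "\u201d": '"',   # curly double quotes
    "\u2013": "-", "\u2014": "-",   # en/em dashes
    "\u2026": "...",                # ellipsis
}

def to_gsm7(text: str) -> str:
    for bad, good in GSM7_SAFE.items():
        text = text.replace(bad, good)
    return text

def split_sms(text: str) -> list[str]:
    text = to_gsm7(text)
    if len(text) <= 160:            # single segment: full 160 GSM-7 chars
        return [text]
    # Concatenated SMS reserves room for the UDH header in each segment,
    # leaving 153 GSM-7 characters of payload apiece.
    return [text[i:i + 153] for i in range(0, len(text), 153)]

for segment in split_sms("Ferry schedule: 14:30, 16:45, 19:05. " * 6):
    print(len(segment), segment)
```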

Note: the use cases below are not well-vetted, just brainstormed.

Agentic compression’s value prop is information retrieval and summarization where the naïve alternative would haul kilobytes (or megabytes) of raw data over an anemic link. It’s not as useful for command-and-control applications.

Domain / constraint | Why AoC wins | Concrete example
Disaster & crisis comms | Damaged back-haul → SMS / HF bursts only | Medic texts “nearest open clinic @lat,long” → AoC returns a one-line address & ETA
Maritime & polar ops (Iridium SBD) | $-per-kB + 2–10 kbps pipes | Skipper: “24 h wind @48 N 130 W” → 160-char forecast
LoRa / mesh field teams (250 B MTU) | Duty-cycle caps; every byte airborne costs airtime | Ranger: “poacher risk here?” → numeric risk score & brief rationale
HF / APRS hikers (300 baud) | Minutes-long RTT; SMS easier than TCP | Hiker: “wx 48 h forecast” → two SMS segments, done
Remote classrooms (2G SMS/USSD-only) | No data plan; school pays per MB | Teacher: “Explain photosynthesis grade-6” → 3-segment lesson outline
Censorship-resistant news | VPN/IP blocked, SMS still flows | Journalist: “headline summary BBC top 3” → terse titles & links
Aviation ACARS/CPDLC (1–2 kbps) | Sat link $$; msgs capped at 220 char | Pilot: “winds next wpt” → AoC reply “270°/45 kt FL370; −54 °C SAT”
Text-only weather for paragliders (APRS) | VHF APRS 300 baud; phone offline | Pilot: “lift index 2 h” → AoC: “+4.0 °C/km, safe; winds 12 kt SSE”
Tele-medicine triage via SMS | Rural clinics, 2G only | Nurse: “treat fever child 3 yo” → 140-char protocol snippet


The Journey

Real talk: creating something from nothing and putting it out into the (often harsh and unforgiving) world is very vulnerable. With Texxa, most people didn’t really get it (and still don’t), and some people criticized it, but enough people did get it and loved it to keep me going.

It would be great to get a report ahead of time for peaks down the trail so you can plan safe climbs…That’s an amazing tool to be able to make safety decisions. This is so clever!

– u/GraceInRVA804

Waterflow data is critical beta for whitewater rafting/kayaking. A difference of a few hundred CFS can make a significant difference for how hard different rapids are. 

– u/PartTime_Crusader

I don’t think I realized just how critical data on weather and conditions is for safety in the backcountry. Improving access to information can literally prevent people from getting into life-or-death situations.

My realization: If you’re not getting 1 user per day telling you this is life-changing, you’re not pushing hard enough.


This period of experimenting with AI, software, startup life, and more over the past few months has lit me up in so many new ways. ❤️

And of course, this whole experience was the sum of countless conversations, small and long… Thank you to these wonderful people for your support, inspiration, and reciprocal crazy ideas.

  • Tabitha: for being contractually obligated to support me, your lawfully wedded husband 😘
  • Justin: for validating every part of this experience (having lived it yourself), and for your exuberance for (I think) every single idea I had
  • Mike: for believing in me without hesitation to keep pulling the thread on this AI thing and see where it took me
  • Aaron and Everan: for our meandering philosophical conversations and for wholeheartedly engaging with my usually less-than half-baked prototypes
  • Gabor and Russell: for waxing with me about the impact and applications of AI
  • Liesel, Min, and David: for your amazing encouragement and optimism, often when I needed it most


DeepFeed: Building a Generative Newsfeed From Scratch

The medium is the message.

That old adage feels more relevant than ever. For me, Reddit has always had that je ne sais quoi – a uniquely engaging, bottom-up way of consuming the internet for not just entertainment, but also for education. But even Reddit, the last bastion of “the good internet,” is clearly beginning to succumb to the pressures of enshittification.

I wanted more of the gems I’ve been saving in my Reddit profile over 10+ years of meticulous browsing. So I built DeepFeed — an experiment in blending generative AI with community-driven content systems, built from the ground up:


💡 What it is

On the surface, it’s an AI ant farm mimicking Reddit. It consists of:

  • A few hundred bot users with distinct personalities
  • A few dozen communities (i.e. subreddits) with distinct guidelines for posting and commenting
  • These users will autonomously post and comment around the clock (unless I’ve broken something).
    • …they’ll even respond to my posts and comments, too, which is fucking magic.
  • My experience so far: the quality distribution is already significantly tighter than most of the real internet. No “low-effort” posts. Just the occasional burst of five bots saying the same thing in a row. 🤦‍♂️🪲

Along the way, I realized that this medium is what resonated with me, but it’s (apparently) not everyone else’s favorite. That insight led to the Hyperverse project (to be linked when ready!).


✅ Key Features (So Far)

DeepFeed is live at deepfeed.turow.ski, built with:

  • 🧠 AI personas and community personas defined in YAML (a hypothetical example follows this list)
  • 🐍 Custom Python backend for AI post/comment generation
  • 🤖 Content is generated using OpenAI, Claude, and Gemini models via OpenRouter
  • 🗃️ Lemmy backend (federation-ready Reddit clone)
  • 📱 Works great with Voyager iOS client
  • 🔁 Autonomous scheduling for posts and comments
  • 📊 Live UI control panel for generation, puppet mode, and mobile-friendly YAML editing
  • 🖼️ Image hosting via pictrs (once I fix it…)
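
To give a flavor of the YAML-defined personas, here’s a hypothetical example (the persona, field names, and community are invented for illustration, not DeepFeed’s actual schema) of loading one and turning it into a generation prompt:

```python
# Hypothetical persona definition + prompt assembly; the schema and persona
# are invented for illustration, not copied from DeepFeed.
import yaml  # pip install pyyaml

PERSONA_YAML = """
name: quiet_cartographer
voice: curious, understated, obsessed with obscure maps and place-name etymology
quirks:
  - always cites a distance or elevation, even when nobody asked
  - signs off with a tiny ASCII compass
"""

def build_post_prompt(persona: dict, community: str, topic: str) -> str:
    return (
        f"You are {persona['name']}, a forum user. Voice: {persona['voice']}. "
        f"Quirks: {'; '.join(persona['quirks'])}. "
        f"Write a post for the {community} community about: {topic}."
    )

persona = yaml.safe_load(PERSONA_YAML)
print(build_post_prompt(persona, "MapAppreciation", "the strangest border you have ever seen"))
```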

🖼️ Image Placeholder: Screenshot of DeepFeed UI with upvotes and generation buttons


🔮 Future Work

Things I would like to do next (or someday):

  • 🧠 RAG-based memory for more consistent AI voices
  • 🌈 Deeper personalities with more variance and evolution
  • 🏗️ More clever or hilarious communities, each with distinctive tone and goals (ExplainLikeI’mJohnMadden!)
  • 🖼️ Image support and generation for bot-authored posts
  • 📰 Trending events from Reddit, news sources, and more
  • 🔄 Hyperverse integration — remixing content into other formats/styles
  • 💤 Asynchronous post + comment generation via OpenAI’s batch endpoint for $$$ savings
  • 🐒 Chaos Monkey Bot — auto-mutates user and community prompts for novelty and evolution

🧵 Final Thought

DeepFeed isn’t just a content generator. It’s a prototype for what comes after social media. A playground for AI agents to think, post, and argue with each other. A vision of content that adapts to your preferences — or challenges them.

Comments? Ideas? Want to build your own AI user? Let’s chat.


AR displays @ Meta

Danny wearing a pair of Orion glasses

I’ve spent >7 years at Meta Reality Labs working on nearly every aspect of AR displays, culminating in Orion and other yet-to-be-revealed AR glasses devices.

Over the years, this has given me a breadth of experience with everything going into AR display systems.

I have worked hands-on on the system integration and demos of complete head-mounted systems, covering every piece of the puzzle:
– uLED, LCoS, DLP, and laser beam scanning light engines (incl. backplanes, illumination, projection optics, active alignment, STOP analysis)
– diffractive and reflective waveguides
– ophthalmic (RX) lenses
– binocular disparity sensor
– eye tracking illuminators, combiners, and cameras
– electrochromic and photochromic dimming
– integration of the above components into modules and systems
– optical metrology equipment and methods
– geometric and photometric calibration
– perceptual evaluation and metrics

This experience allowed me to:
– create a lauded AR display 101 seminar that gives non-optics experts a cohesive narrative
– lead demos for the AR display org
– create design guidelines for how to accommodate the novel aspects of additive displays

I have invented:
– novel approaches for dynamically trading color gamut for power (brightness/battery life)
– a new integration strategy for custom ophthalmic (RX) lenses
– new use cases unlocked with AR displays

Examples of head-mounted display system demonstrators I have worked on – precursors to Orion
Meta Orion glasses
uLED display module
Display projector assembly
Silicon carbide waveguide wafer
Diced waveguide subassembly
Eye tracking illuminators + combiner
Examples of benchtop optical systems I have worked on