Texxa – AI, anywhere

I’m beyond excited to share a new project I’ve been working on for the last couple of months! 

At various points in my career, I have flirted with the idea of creating a startup. During my last 2 years at Meta, especially, a big part of me wanted to be a product manager. But a couple of things stopped me. First, I never felt like any of my ideas were quite compelling enough for the time and energy they would take to pursue. Second, and just as importantly, I never had the space to explore and experiment with ideas that didn’t have a clear return on investment. 

Well, both of these things finally happened: a great idea and the time and space to make it happen. 

I found a problem

I was camping on San Juan Island with no reception and got to actually try Apple’s new-ish satellite messaging for the first time. I had been planning out our next day and wishing I could just look up the ferry schedule for our trip home to Seattle.

Camping near Friday Harbor with the fam

And then it clicked: why couldn’t I just send a text with my newfound satellite messaging superpowers to ChatGPT and have it look up the schedule for me?

This is a familiar pattern: something I have wanted for years whenever I was in the backcountry, on a plane (before ubiquitous in-flight Wi-Fi), or stuck on 2G and needed to Google something – whether a critical piece of information or just fodder for my often-burning curiosity.

So I made something new!

It turns out you can’t just send a text to ChatGPT. So, with the power of numerous AI agents, I prototyped my own AI agent in under a week.

I’m proud to introduce Texxa! You can text it from nearly anywhere, and it will scour the internet for information, distill it all into a concise answer, and then transmit back just that answer.

Because I love naming things, I call this agentic compression, and it enables you to do things that simply weren’t possible before without a proper internet connection or a (likely expensive) satellite phone.

I created Texxa because it fills a legitimate need and is something I wanted. Once I had demonstrated to myself that yeah, this thing works and is awesome, I set about turning Texxa into more of a proper service – it’s simply something that other people should have access to.


Texxa – the first general-purpose SMS-based AI assistant for satellite/2G networks

  • Texxa brings AI to people without a reliable data connection, with reduced equipment requirements:
    • Backcountry adventurers with a satellite connection on their phone or existing satellite devices
    • People in remote areas or on boats, people with unreliable internet connections, astronauts (probably)
    • There are nearly 1 billion feature phone users globally (particularly in emerging markets) who cannot install AI apps and often have only a 2G connection with no data
    • No app, account, or internet needed.
  • Texxa connects them all to the broader internet by using SMS text messaging on common phones, over ultra-low-bandwidth satellite and edge networks, to an LLM-powered AI agent with access to realtime data (weather, transit schedules, river flows, news, and more)

Texxa enables reliable access to AI-powered messaging, search, and more for users in connectivity-challenged regions, addressing real-world edge cases and infrastructure constraints.


The Tech

I had the opportunity to learn SO many things, all made possible by the use of AI – learning about new technologies, brainstorming use cases, architecting + coding + debugging a system, and so much more – all within the span of a few months. I’m not a professional software engineer, but I’ve loved diving into this space.

So, how does Texxa work? Read these sections if you want to get technical – skip to The Journey if you don’t!

Agentic compression: an LLM-powered distillation that collapses megabytes of remote data into the one sentence the user actually needs.

A single Google search result: 500 kB. Texxa’s answer: 140 bytes. That’s an effective compression ratio of >3000:1.

When a single text message takes at least 30 seconds to send over an 80 bit/s satellite link, every extra byte can mean the difference between getting an answer and not getting one.

Think of Texxa as using an LLM as a hyper-compressor. The agent chews through weather APIs, USGS river gauges, or the entire dang web in a datacenter, then distills the answer into a single 160-character SMS burst. Over Apple’s ~80 bps Globalstar link (that’s 0.08 kbps!) that 300-byte container takes ~30 seconds. Try the same lookup with a chat app (≈ 3 kB) and you wait ~5 minutes; load a full Google results page (≈ 1 MB) and you’ll still be staring at the sky tomorrow. Texxa doesn’t make the pipe faster – it just avoids shoving unnecessary bytes through an anemic pipe in the first place. It’s compression by distillation.
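If you want to sanity-check those numbers, the math fits in a few lines of Python. This is just the back-of-the-envelope arithmetic from above – the payload sizes are the same rough estimates, not measurements:

```python
# Airtime for various payloads over an ~80 bit/s satellite link.
# Payload sizes are the rough estimates used above, not measurements.
LINK_BPS = 80

payloads_bytes = {
    "Texxa SMS (160 GSM-7 chars + SS7 overhead)": 300,
    "Cold-socket chat message (TCP+TLS+HTTP+JSON)": 3_000,
    "Mobile Google results page (no cache)": 1_000_000,
}

for name, nbytes in payloads_bytes.items():
    seconds = nbytes * 8 / LINK_BPS
    print(f"{name}: {nbytes:>9,} B -> {seconds:,.0f} s (~{seconds / 60:,.1f} min)")
```

Running it gives ~30 s, ~5 min, and ~28 h respectively – the same three orders of magnitude as the table below.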

Texxa demonstrates that an LLM agent over SMS can, from the user’s perspective, effectively deliver an answer ~10x faster than a TCP/IP message – and vastly faster than an equivalent Google search – by combining ultra-lightweight SMS with the ability to go intelligently gather information for you.

Note: I am not a network engineer and could be completely off base, but I tried hard to disprove that conclusion because it’s a pretty bold statement, even if for a very niche (but common enough) situation.

  • Repeat visit, lots of caching (best case): ~200–400 kB for a simple fact (e.g., a sports score or the weather), if all scripts/fonts/etc. are cached and you only load the new HTML and essential data.
  • One-shot, fresh session (no cache): typically 500–1,500 kB for a simple search.

Item | Bytes on wire | Time at ≈ 80 bps
One Texxa SMS (160 GSM-7 chars) | ≈ 300 B on air (payload + SS7) | ≈ 30 s (Apple: “a message might take 30 s”)
One-shot chat-app text (cold socket) | ≈ 3,000 B (TCP + TLS + HTTP + JSON) | ≈ 5 min
Mobile Google results page (no cache) | 500 kB–1.5 MB | ≈ 14–42 h

If your query isn’t a single request–single response (“one-shot”), the math breaks completely: every extra resource or redirect adds seconds to minutes at this bit rate, so a typical Google page would simply never finish loading before the phone or the user times out.

Even a cold IP chat message is ~10× heavier, byte-for-byte, than a single-segment SMS (160 characters – that’s only 140 bytes), stretching delivery from ~½ min to ≈ 5 min on the ultra-narrow satellite link. The SMS, meanwhile, uses an order of magnitude less bandwidth and precious battery.
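If you’re curious where 160 characters in 140 bytes comes from, it falls straight out of GSM-7’s septet packing: 160 × 7 bits = 1,120 bits = exactly 140 bytes. Here’s an illustrative packer (ASCII-only; the real GSM 03.38 alphabet and escape sequences are elided):

```python
def pack_gsm7(text: str) -> bytes:
    """Pack characters into 7-bit septets, little-endian, per GSM 03.38.

    Illustrative only: assumes each character maps to a single septet,
    which holds for the basic Latin subset but not the full alphabet.
    """
    bits, nbits, out = 0, 0, bytearray()
    for ch in text:
        bits |= (ord(ch) & 0x7F) << nbits
        nbits += 7
        while nbits >= 8:
            out.append(bits & 0xFF)
            bits >>= 8
            nbits -= 8
    if nbits:
        out.append(bits & 0xFF)
    return bytes(out)

print(len(pack_gsm7("x" * 160)))  # 140 bytes: 160 chars * 7 bits / 8
```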

That’s why Texxa sticks to SMS:

  • delivers in ~30 s even on Apple’s narrow satellite pipe;
  • anything heavier balloons to minutes or hours at the same 80 bps.

Why traditional compression can’t close the gap for tiny payloads

  1. Headers dominate
    TLS, HTTP, and JSON framing (≈ 800–1,200 B) don’t compress well, and the TLS handshake itself must cross the wire before any compression can apply. Even if a proxy gzips the 140 bytes of user text down to 100 B, the total on-air size is still ≳ 1 kB – 3–4× a whole SMS.
  2. Handshake tax is fixed
    TCP three-way + TLS 1.3 hello ≈ 250–400 B each direction even with 0-RTT resumption. On a cold socket those bytes are unavoidable and can’t be gzipped.
  3. Satellite round-trips are slow
    Each extra handshake round-trip (≈ 250 ms in GEO; tens of ms in LEO plus Globalstar’s TDMA scheduling) stalls the pipe while the radio is already power-limited.
  4. Compression shines only on large objects
    Sat-Fi’s Yippy proxy famously turns a 600 kB news page into ≈ 30 kB, dropping load time from minutes to seconds at 8 kbps. But there’s nothing to squeeze in a single-sentence JSON reply (see the snippet after this list).
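Points 1 and 4 are easy to demonstrate yourself: run gzip over an SMS-sized answer (a made-up ferry reply here) and there’s essentially nothing to squeeze – the container overhead can even make it bigger:

```python
import gzip, zlib

answer = b"Ferry Friday Harbor->Anacortes: 1:55p, 4:10p, 7:40p. Arrive 45 min early."

print(len(answer))                    # ~75 B raw
print(len(gzip.compress(answer)))     # similar or larger: gzip adds ~18 B of framing
print(len(zlib.compress(answer, 9)))  # best case shaves only a handful of bytes
```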

A common misunderstanding is that AI agents like ChatGPT or Texxa are synonymous with LLMs. This is NOT the case; agents are usually highly complex software systems that, yes, are ultimately magical because of the LLM(s) at the core, but they have many other supporting components to bring it all together and to mitigate some of the limitations inherent to LLMs…

This high-level system block diagram shows just how many pieces there are in an AI agent beyond the LLM itself. A key aspect of any agent is its MCP servers for interfacing with outside data sources, so that you aren’t relying on the model’s training alone – obviously necessary if you want to check the weather or look up a ferry schedule.

Here’s the stack I architected and built myself with agentic tools (a simplified sketch of the SMS message path follows the list):

  • Core: Python 3.12 + FastAPI + PostgreSQL + Docker
  • AI: OpenRouter LLMs + 14 MCP tool servers
    • 6 custom MCP servers: Transitland, NASA FIRMS wildfires, NOAA aurora forecast, USGS water flow, NWS weather, scheduler
    • 9 reference MCP servers: fetch, Serper, Perplexity, time, calculator, Google Maps, NewsAPI, AlphaVantage, OpenWeatherMap
    • Note: not a ‘true’ duplex streaming implementation of MCP because most LLM providers do not support that schema yet
  • Channels: SMS/Telegram/WhatsApp/RCS via Twilio + Telegram APIs
    • Unified Message Pipeline: Single pipeline for all channels with adapters
    • SMS Optimization: GSM-7 conversion + satellite optimization (accommodates UDH header stripping)
    • Started adding international phone numbers to expand the user base!
  • Infrastructure & DevOps: Docker Swarm + Caddy reverse proxy + GHCR deployment
    • Security: Message encryption + Docker secrets + role-based auth
    • Admin panel: user management, real-time metrics, health checks, error tracking watchdog
    • PostgreSQL-based message queue and retry logic
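
To make the unified message pipeline concrete, here’s a heavily simplified sketch of the SMS leg: a FastAPI webhook that receives a Twilio inbound message, asks the agent for an answer, and trims the reply to a single GSM-7 segment. Names like run_agent are illustrative stand-ins, not Texxa’s actual internals:

```python
from fastapi import FastAPI, Form, Response

app = FastAPI()

# Abridged GSM-7 basic alphabet (control chars and extension table omitted).
GSM7_BASIC = set(
    "@£$¥èéùìòÇØøÅåΔ_ΦΓΛΩΠΨΣΘΞÆæßÉ !\"#¤%&'()*+,-./0123456789:;<=>?"
    "¡ABCDEFGHIJKLMNOPQRSTUVWXYZÄÖÑÜ§¿abcdefghijklmnopqrstuvwxyzäöñüà"
)

def to_single_segment(text: str, limit: int = 160) -> str:
    """Coerce an answer into one GSM-7 SMS segment: substitute
    characters outside the GSM-7 set, then truncate to 160 chars."""
    text = "".join(c if c in GSM7_BASIC else "?" for c in text)
    return text[:limit]

async def run_agent(question: str) -> str:
    # Stand-in for the real agent loop (LLM + MCP tool calls).
    return f"(agent answer for: {question})"

@app.post("/sms")
async def inbound_sms(Body: str = Form(...), From: str = Form(...)) -> Response:
    answer = await run_agent(Body)
    twiml = f"<Response><Message>{to_single_segment(answer)}</Message></Response>"
    # Twilio expects a TwiML reply with an XML content type.
    return Response(content=twiml, media_type="application/xml")
```

In the real system, the same pipeline handles Telegram, WhatsApp, and RCS through per-channel adapters; only the segment-trimming step is SMS-specific.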

Note: the use cases below are not well-vetted, just brainstormed.

Agentic compression’s value prop is information retrieval and summarization where the naïve alternative would haul kilobytes (or megabytes) of raw data over an anemic link. It’s not as useful for command-and-control applications.

Domain / constraint | Why agentic compression (AoC) wins | Concrete example
Disaster & crisis comms | Damaged back-haul → SMS / HF bursts only | Medic texts “nearest open clinic @lat,long” → AoC returns one-line address & ETA
Maritime & polar ops (Iridium SBD) | $-per-kB + 2–10 kbps pipes | Skipper: “24 h wind @48 N 130 W” → 160-char forecast
LoRa / mesh field teams (250 B MTU) | Duty-cycle caps; every byte airborne costs airtime | Ranger: “poacher risk here?” → numeric risk score & brief rationale
HF / APRS hikers (300 baud) | Minutes-long RTT; SMS easier than TCP | Hiker: “wx 48 h forecast” → two SMS segments, done
Remote classrooms (2G SMS/USSD-only) | No data plan; school pays per MB | Teacher: “Explain photosynthesis grade-6” → 3-segment lesson outline
Censorship-resistant news | VPN/IP blocked, SMS still flows | Journalist: “headline summary BBC top 3” → terse titles & links
Aviation ACARS/CPDLC (1–2 kbps) | Sat link $$; msgs capped at 220 char | Pilot: “winds next wpt” → AoC reply “270°/45 kt FL370; −54 °C SAT”
Text-only weather for paragliders (APRS) | VHF APRS 300 baud; phone offline | Pilot: “lift index 2 h” → AoC: “+4.0 °C/km, safe; winds 12 kt SSE”
Tele-medicine triage via SMS | Rural clinics, 2G only | Nurse: “treat fever child 3 yo” → 140-char protocol snippet


The Journey

Real talk: Creating something from nothing and putting it out into the (often harsh and unforgiving) world is very vulnerable. With Texxa, most people didn’t really get it (and still don’t), some people criticized it, but enough people did get it and loved it to keep me going.

It would be great to get a report ahead of time for peaks down the trail so you can plan safe climbs… That’s an amazing tool to be able to make safety decisions. This is so clever!

– u/GraceInRVA804

Waterflow data is critical beta for whitewater rafting/kayaking. A difference of a few hundred CFS can make a significant difference for how hard different rapids are. 

– u/PartTime_Crusader

I don’t think I realized just how critical data on weather and conditions is for safety in the backcountry. Improving access to information can literally prevent people from getting into life-or-death situations.

My realization: If you’re not getting 1 user per day telling you this is life-changing, you’re not pushing hard enough.


This period of experimenting with AI, software, startup life, and more over the past few months has lit me up in so many new ways. ❤️

And of course, this whole experience was the sum of countless conversations, small and long… Thank you to these wonderful people for your support, inspiration, and reciprocal crazy ideas.

  • Tabitha: for being contractually obligated to support me, your lawfully wedded husband 😘
  • Justin: for validating every part of this experience (having lived it yourself), and for your exuberance for (I think) every single idea I had
  • Mike: for believing in me without hesitation to keep pulling the thread on this AI thing and see where it took me
  • Aaron and Everan: for our meandering philosophical conversations and for wholeheartedly engaging with my usually less-than-half-baked prototypes
  • Gabor and Russell: for waxing with me about the impact and applications of AI
  • Liesel, Min, and David: for your amazing encouragement and optimism, often when I needed it most