Fellas, hire me for something. I'm sure I could make money
1/24/2026 7:31:56 AM
We only hire robots now and worry about our own jobs. At least I do. Ironically, my current company is mega scared of building new stuff even though the cost of doing stuff is low. There's also the operational concern of how to support all this AI-generated software.
1/24/2026 1:51:30 PM
Not disappointed if robot taxis take the jobs of DoorDash folks, etc.
1/29/2026 3:30:46 AM
1/29/2026 10:04:12 AM
That's a huge win, moving from theoretical to minimal input. It's wild, all the workflows you see popping up mixed with the navel gazing about "maybe this will be a problem" and "AI doesn't really work and just copies code". Super cool. Terrifying, but cool. Have you seen it flag something like a cert or DNS error that was a legitimate error but not a front-end error per se?
1/29/2026 10:33:46 AM
1/29/2026 11:03:37 AM
Are you grounding it at all with something like Spec Kit or Conductor? Or just your own prompt tweaking to help the guardrails?
1/29/2026 11:57:38 AM
Mostly rolled our own, with a combination of prompt tweaking, claude.md rules, and memory management. As mistakes are found in PRs, we tweak along the way to make improvements. Sometimes we change the overall claude.md, but more often we're updating a claude.md file in a specific folder where Claude just needed more context. It's everyone's job to check in improvements to our md files, although some are better at it than others.
1/29/2026 3:50:04 PM
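For anyone curious what the folder-level pattern above looks like in practice, here's a minimal sketch of a per-folder claude.md. The path, rules, and file names are made up for illustration; the point is that corrections from PR review land next to the code they apply to rather than in the repo-root file.

```markdown
<!-- src/billing/claude.md — hypothetical folder-level rules file -->
# Context for this folder
- All money amounts are integer cents; never introduce floats.
- New DB access goes through `repository.py`, not raw SQL in handlers.
- When a PR review catches a repeated mistake here, add the rule to
  this file, not the repo-root claude.md, so the context stays scoped.
```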
I started working with the organizers to help plan the meetups. I saw someone from Red Hat sign up to present - was that you, snewf?
2/4/2026 3:15:47 PM
Resurrected an old device with AI: I have an M-Audio Transit USB that hasn't had working macOS drivers for 15 years. I decompiled the last known Mac drivers and told AI to make it work. It took about 2 hours of back and forth (and my own knowledge of the driver architecture for this device), but it works! https://ironj.github.io/maudio-transit/
2/5/2026 10:46:07 AM
This went semi viral on bsky, thought you all should be in the loop too:
2/20/2026 5:55:01 PM
Yeah, LLMs on chip are gonna be a huge game changer. I would expect that Nvidia is looking at that immediately, to augment the Groq acquisition with another acqui-hire or just by building it themselves.
2/22/2026 3:11:01 PM
Lenovo is building out a big AI group (LATC) in Raleigh if anyone needs a $200k+ job. I can pass your name to a recruiter if you consider yourself a senior AI engineer - just shoot me a DM.
3/31/2026 7:18:49 PM
I guess we really don't have a general "AI" thread, so I'm gonna keep jamming in here. I bought a micro PC to run local LLMs on - one of the multitude of AMD Strix Halo 395 variants. I've been running Qwen 3.6 on it locally, and it's OK; tok/s is kinda slow, but it's fun to play with for stuff like Hermes agents. Nowhere near as good as the frontier models, but I definitely think this will be the wave sooner rather than later, especially since it's clear the free/subsidized token train is gonna dry up.

It's nice to be able to point OpenCode at a local LLM, too, instead of relying on Claude Code for everything. I'm currently training a caveman version of Qwen 3.6 35B MoE on RunPod to see if I can get the tok/s up.
5/2/2026 6:34:17 PM
AI on TWW. It's more likely than you'd think!
5/2/2026 8:30:42 PM
^^ I looked at Hermes after deploying Openclaw, because Openclaw's backend has a bunch of problems. I don't think anyone has cracked the code on persistent agents, but I agree these are exceeding my expectations. The combination of an agent-controlled memory and "soul" system seems to have a lot of emergent self-correcting mechanisms. They're very responsive to how they are onboarded, though; they need a really good "employee manual." But I think we're at the point where you can pre-train them for certain types of tasks and have them as drop-in replacements for certain types of roles. The intellectual property is the accumulated memories and directives files.
5/2/2026 8:44:39 PM
5/3/2026 12:08:35 AM
That was sort of my intent on the "OpenAI" thread. I built up a fork of Nanoclaw that had an iOS app. It was pretty handy while I was using it, but I never got the latency of the voice mode solid. It did work, though. I could turn my ceiling fans on and off in the house. To which my wife replied, "Hasn't Alexa been able to do that for 10 years?"
5/3/2026 2:22:19 PM
I know someone successfully using it as their first-line tech support for a small business. Each customer has their own Slack channel so the bot knows the full customer context, and they use n8n and other automation tools around it to give the bot "managed" access to data sources.

I had Openclaw build a Bluetooth logger with a localhost-only web UI to tell if any suspicious devices were around. It made a book-recommender web page for my partner to track books she likes and get new recs. It also set up an email address so it can handle long-running requests like finding vacation recs, etc. The agent harnesses need much better context management, though; that's the root cause of any weird behavior I've seen so far (duplicate posts, etc.).

The image server on the TWW bot was set up entirely by Claw; it even somehow requested a Let's Encrypt SSL cert and set up the cert server to support HTTPS.
5/3/2026 5:07:06 PM
When you say "it," what was the stack? I've implemented it a couple of ways. The first is kind of the naïve way: you transcribe the voice, possibly on device if it's on a phone, then you send the text to a smaller model and stream the response back. As you get what you think is a large enough block of text, you send it to text-to-speech.

A slightly better version uses one of the voice models where the audio goes to something like DeepSeek. The OpenAI voice model is very good, but it's a little nerfed and you can't rig it to do tool calls.
5/3/2026 5:17:22 PM
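The "naïve" pipeline described above can be sketched in a few lines. The transcription, model, and TTS calls here are stand-in stubs, not any real API; the part worth showing is the chunking step, which lets speech start before the full response has arrived.

```python
# Sketch of: transcribe -> small LLM (streamed) -> chunked text -> TTS.
# Only the chunking logic is real; everything upstream/downstream is stubbed.

def chunk_stream(tokens, min_chars=40):
    """Accumulate streamed LLM tokens and yield a block once it is long
    enough and ends at a sentence boundary, so TTS can begin early."""
    buf = ""
    for tok in tokens:
        buf += tok
        if len(buf) >= min_chars and buf.rstrip().endswith((".", "!", "?")):
            yield buf.strip()
            buf = ""
    if buf.strip():  # flush whatever is left at end of stream
        yield buf.strip()

# Stubbed usage: pretend these tokens streamed back from the model.
fake_tokens = ["Sure. ", "Turning the ceiling fan on now, ",
               "and I'll confirm once the switch reports back. ", "Done."]
for block in chunk_stream(fake_tokens):
    print(block)  # in the real pipeline: hand each block to text-to-speech
```

The trade-off in `min_chars` is latency versus choppiness: smaller blocks start speaking sooner but sound more fragmented.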
It being Openclaw. I haven't dived into voice models because my Openclaw is on a 10-year-old NUC with a Celeron processor and 8 GB of RAM; voice of any kind would be too slow, I think (I also don't really like voice UIs). I think by year's end all the models will have much better voice and video/image support, and better reasoning and context capabilities. Robotic vision-language-action models will essentially be the next big thing. Voice inputs that can't detect intonation, or outputs that can't produce it, are a dead end, I think.
5/3/2026 5:45:10 PM
It's probably fine to stream the voice to one of these APIs. I said DeepSeek above but meant Deepgram: https://deepgram.com/. It speeds the voice exchange up a bit, but I never got to natural-conversation levels. The vast majority of the few dozen tech people in the area where I currently live work in robotics and use VLA models. One has a startup to collect data for training, and he helped me with my robot dabbling.
5/4/2026 7:13:13 AM
5/4/2026 10:09:35 AM