
I can't write a single line of code. I built a multi-AI research platform anyway. Here's what 6 weeks looked like.

I'm not a developer. I don't know Python. I can't read JavaScript. I'd never opened a terminal before March 2026.

Six weeks ago, I had $10K and an idea: what if you could make multiple AI models work together on one research question — not just ask ChatGPT and hope for the best?

Today I have a live SaaS platform with 10 users, zero paying customers, and a product that genuinely works.

I built the entire thing by talking to Claude.


The idea that wouldn't leave me alone.

I'm from Christchurch, New Zealand. I trade options for a living — ETH iron condors on Deribit, if that means anything to you.

Trading taught me one thing: no single source of information is reliable. You cross-reference. You verify. You look for what one analyst missed that another caught.

So why do we accept a single AI model's answer as "research"?

I wanted something that worked like a team of analysts:

  • One gathers context and asks clarifying questions
  • One searches the live web for real-time data
  • One writes the deep analysis
  • One does adversarial quality checks
  • One synthesizes everything into a final report

Five stages. Five different AI models. One report.
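Concretely, a pipeline like that can be wired up in surprisingly little code. The sketch below is illustrative only, not SANICE's actual implementation; the stage list mirrors the bullets above, and `call_model` is a placeholder standing in for real provider API calls:

```python
# Illustrative five-stage, multi-model research pipeline.
# Each stage is a (model, instruction) pair; every stage sees the
# original question plus everything produced so far.

STAGES = [
    ("model-a", "Gather context and list clarifying questions."),
    ("model-b", "Search the live web for real-time data."),
    ("model-c", "Write the deep analysis."),
    ("model-d", "Adversarially check the analysis for errors and gaps."),
    ("model-e", "Synthesize everything into a final report."),
]

def call_model(model: str, prompt: str) -> str:
    """Placeholder for a real provider API call (OpenAI, Anthropic, etc.)."""
    return f"[{model}] output for: {prompt.splitlines()[0]}"

def run_pipeline(question: str) -> str:
    context = f"Research question: {question}"
    for model, instruction in STAGES:
        output = call_model(model, f"{instruction}\n\n{context}")
        context += f"\n\n--- stage: {model} ---\n{output}"
    return context  # the last stage's synthesis closes out the report

print(run_pipeline("What drove inflation in 2023?"))
```

The key design property is that later stages see earlier stages' output, so the adversarial check in stage 4 has the full analysis to attack.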


What "building with AI" actually looks like when you can't code.

It's not magic. It's exhausting.

I'd describe what I wanted. Claude would write the code. I'd deploy it. It would break. I'd paste the error back. We'd fix it. Repeat 200 times a day.

I learned what a "migration" is by accidentally breaking my database at 2am.

I learned what "RLS policies" are because my API returned empty arrays for three hours and I couldn't figure out why.

I learned what "Cloudflare proxy" means because my server-side renders kept failing and nobody could explain why until we bypassed it.

I don't understand most of the code in my repo. But I understand every architectural decision, every data flow, every trade-off. I'm the product manager. The AI is the engineer.


What I actually built.

SANICE AI has three products:

Glass — the research pipeline. You ask a question. Five AI models (including GPT-4o, Gemini, Grok, and Claude) collaborate through a 5-stage pipeline. You get a 3,000+ word research report with charts, citations, and follow-up chat. Under 5 minutes.

Pulse — automated monitoring. Set up alerts on topics from your research. Get daily email digests when something changes.

Collective — multi-model chat. Talk to different AI models in one interface.

Here's an example report it auto-generated: https://sanice.ai/research/macro/research-the-key-inflation-concerns-and-drivers-observed-throughout-2023


What the numbers look like.

  • 10 registered users (all friends and family)
  • 0 paying customers
  • 0 organic users
  • ~$18/month in AI costs
  • $10K budget, ~$3K spent so far
  • Stack: FastAPI, Next.js, Supabase, Railway, Vercel, Cloudflare

I'm not going to pretend these are good numbers. They're not. But the product works, and I needed to stop building and start talking to strangers. This post is part of that.


What AI changed for me.

It let me play a game I had no ticket to.

I couldn't have built this two years ago. Not because the idea didn't exist, but because the barrier to entry was "learn to code for 2 years first." AI removed that barrier entirely.

It changed the risk calculation.

$10K and 6 weeks is survivable. $10K and 2 years is not. That's the difference between "I'll try it" and "I'll think about it forever."

It made me a different kind of founder.

I don't debug code. I debug decisions. "Should we use Redis or Supabase for rate limiting?" "Should Stage 4 use Gemini or Grok for quality checks?" Those are the questions I spend my time on. The AI handles the implementation.
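To make the "Redis vs. Supabase for rate limiting" question concrete: the trade-off is that an in-process limiter is trivially simple but loses its counts on restart and can't be shared across servers, which is exactly why an external store comes up. Here's a minimal fixed-window limiter, purely illustrative and not the platform's actual code:

```python
import time
from collections import defaultdict

class FixedWindowLimiter:
    """Allow at most `limit` requests per `window` seconds, per key.

    In-process only: counts vanish on restart and aren't shared across
    servers, which is the reason Redis (or a database like Supabase)
    enters the conversation for production deployments.
    """

    def __init__(self, limit: int, window: float):
        self.limit, self.window = limit, window
        self.counts = defaultdict(int)  # (key, window_id) -> request count

    def allow(self, key: str) -> bool:
        window_id = int(time.time() // self.window)  # current window bucket
        bucket = (key, window_id)
        if self.counts[bucket] >= self.limit:
            return False
        self.counts[bucket] += 1
        return True

limiter = FixedWindowLimiter(limit=3, window=3600)
print([limiter.allow("user-1") for _ in range(5)])
```

Whether that logic lives in Python, Redis, or a database row is precisely the kind of implementation decision the AI can own once the product decision is made.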


What AI didn't change.

Nobody knows I exist.

This is the part they don't tell you. You can build the best product in the world in 6 weeks, and if nobody knows about it, it doesn't matter.

I spent 6 weeks building. I should have spent 3 building and 3 talking to people.

Reddit blocks my posts (spam filter). Twitter has 0 followers. LinkedIn gets 12 views. Google is slowly indexing my research pages. But "slowly" doesn't pay the bills.

Judgment is still 100% human.

What to build. What to cut. How to price. Who to build for. AI has zero useful opinions on any of these. It will happily build the wrong thing perfectly.


What I'm doing now.

Yesterday I built a content engine that auto-publishes 2 research reports daily — one trending, one evergreen. The idea is that if I can't find users, maybe Google can find them for me through SEO.

It costs $18/month in AI credits. If it works, it's the cheapest marketing channel possible. If it doesn't, I'll know in 30 days.
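At its simplest, an engine like that is just a daily job: pick one trending and one evergreen topic, run each through the research pipeline, and publish the result as an indexable page. The sketch below is a hypothetical outline; the topic lists, `generate_report`, and `publish` are all placeholder assumptions, not SANICE's real code:

```python
import random

# Placeholder topic pools; in practice these might come from trend
# APIs (trending) and a curated backlog (evergreen).
TRENDING = ["2026 rate cut odds", "ETH ETF flows this week"]
EVERGREEN = ["how iron condors work", "key inflation drivers of 2023"]

def generate_report(topic: str) -> str:
    """Placeholder for the multi-model research pipeline."""
    return f"# {topic}\n\n(3,000+ word report would go here)"

def publish(report: str) -> str:
    """Placeholder: write the page and return its URL slug."""
    title = report.splitlines()[0].lstrip("# ")
    return "/research/" + title.lower().replace(" ", "-")

def daily_run() -> list[str]:
    # One trending topic (time-sensitive traffic) + one evergreen
    # (long-tail SEO), matching the two-reports-a-day cadence.
    picks = [random.choice(TRENDING), random.choice(EVERGREEN)]
    return [publish(generate_report(t)) for t in picks]

print(daily_run())  # two new indexable pages per day
```

Scheduled via cron or a hosted scheduler, this is the whole loop: the marginal cost per page is just the pipeline's API spend.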


My honest questions for this community:

I'm at the "I built it and nobody came" stage. If you've been here:

  1. What actually worked to get your first 10 strangers?
  2. Would you use a multi-AI research tool? For what?
  3. Am I solving a real problem, or did I build something cool that nobody needs?

I'd rather hear "this isn't useful" now than discover it in 6 months.


sanice.ai — free tier, no card required

posted to Building in Public on April 12, 2026
  1. 1

    "nobody knows I exist" this is the real wall. not the building part.

    working on something specifically for this problem. people doing real work without the right network, showing up for each other voluntarily. search unsponsored io if curious.

  2. 1

    Respect for shipping and being honest about the numbers. The "stop building, start talking to strangers" line really hit me — I had the same wall with my own indie app (a lightweight memo tool). I spent weeks polishing features nobody asked for before actually showing it to 10 non-friends, and the feedback loop changed everything: half the features I was proud of got cut, and a tiny detail I almost skipped became the thing users mention most. One question — for the 10 users you have now, how often are they actually running reports? Weekly active use vs. "signed up once" is the signal I'd watch before touching pricing or adding more models to the pipeline. What does retention look like after week 1?

  3. 1

    The cross-referencing idea is solid. Single-model answers always have blind spots, and anyone who's done real research knows you never trust one source. That trading background clearly shaped the architecture in a good way.

    What I'd push back on a little: the fact that Claude wrote all the code doesn't mean you don't need to understand it eventually. I write code daily and still use AI for maybe 60-70% of the output, but the debugging, the architecture decisions, the "why is this slow" moments - that's where understanding matters. You'll hit a wall at some point where Claude gives you five different solutions and you won't know which one is right without some technical intuition.

    Not saying that to discourage you. 10 users in 6 weeks with zero coding background is genuinely impressive. Just that the next phase (scaling, paying customers, reliability) tends to require deeper understanding of what's under the hood.

    Curious about the cost per report with 5 different AI models running. That seems like it could get expensive fast.

  4. 1

    I love your story a lot. I'm practically in the same boat as someone starting out with an app and some rubbish stats at the moment... what actually got me building my app was one Instagram reel that had me hooked on how easy the creator made it sound. I'm about a week into building my budgeting app, Trakly, and I've genuinely learnt that not only are there levels to this... but that I should really stop watching reels that get my hopes up super high like that one haha.

    I haven't given up though, and as much as I would love to give advice, all I can really say is keep pushing and don't give up on projects like the one you're working on. Even if you find out later that what you're building isn't useful to anyone, at least you tried, and you can always look into something else that could work out well for you. That's all I have on my end. Best of luck on your journey with what you've built, I truly hope you make it far :)

  5. 1

    This is incredibly inspiring! As someone who's also building in the AI space, I love the multi-model approach you've taken. The idea of having different AI models play different roles - context gathering, web search, analysis, quality checks, and synthesis - is brilliant. It mirrors how actual research teams work.

    The 'build by talking to Claude' journey resonates deeply. There's something powerful about being able to focus on the product vision while AI handles the implementation details. Sure, you hit those 2am database migration issues, but you also ship faster than ever before.

    Your trading background giving you the insight about cross-referencing multiple sources is a great example of how domain expertise translates into better product design. Would love to hear more about how you're planning to convert those 10 users to paying customers - that's often the trickiest part for AI tools!

  6. 1

    Man, reading this felt like reading my own diary from the last couple months. I'm in almost the exact same spot - built an AI-powered SEO tool, talked to Claude for hundreds of hours, learned what database migrations and RLS policies are the hard way, and now I'm staring at the "nobody knows I exist" problem.

    Your line about spending 6 weeks building when you should've spent 3 building and 3 talking to people - that's the lesson I keep learning and re-learning. It's so much easier to add one more feature than to cold DM someone and ask if they'd actually pay for what you built.

    For your question about what works to get first strangers - I'm literally figuring this out in real time, but the thing that's shown the most promise so far is finding people who are already complaining about the specific problem in niche communities. Not "people who might be interested someday" but people who posted this week about the exact pain you solve. The conversion rate on those conversations is wildly different from generic outreach.

    Also your auto-publishing content engine idea for SEO is smart. We did something similar and it's the one thing that feels like it compounds over time even when you're not actively pushing it. 30 days is the right timeframe to test it.

    Good luck from a fellow "I can't believe I'm actually doing this" founder.

  7. 1

    i feel you, i'm at the exact same spot as you now, how do i get people in the app and make them stay?
    I wish you success!!

  8. 1

    Firstly, you are one of the only people who has proved something I am also learning as someone building a SaaS (in the "taste your own food before you sell it" stage) who sucks at coding.

    Truth is: You don’t need a massive budget or a CS degree. Like every business: You just need 1 idea that genuinely solves a problem and makes someone’s life easier/ better. And if you could build what you did without letting the old ‘only coders can build AI’ myth hold you back, I believe you can find the clients too.

    As a copywriter my biggest practical tip is: Identify EXACTLY whose pain you can solve and sell your AI as a solution to a problem that directly impacts their life in a good way rather than as an AI. Because people only care about what you can do for them not what you can do in general.

    Other than that I would recommend finding the streams of distribution (social media, outreach, building an email list, collecting testimonials, asking friends and family to recommend it to anyone that struggles with the problem you solve).

    Track metrics on your input, output and ultimately what sources actually converted to users. Do it every month, repeat what works, let go of what doesn’t and stay consistent.

    You got this!

  9. 1

    I'm building a UK cleaning marketplace and currently at the "I built it and nobody came" stage.

    1. What worked to get your first 10 strangers?
      I'm using AI to create short-form reels across social media.
      Things like:
      Finding trusted local cleaners
      Cleaning tips
      Local service discovery
      "Book a cleaner near you" style content
      Then posting across multiple platforms daily.
      The goal is simple:
      Use volume and consistency as a free distribution strategy instead of paid ads.
    2. Would I use a multi-AI research tool?
      Yes — I'm already heavily using AI for:
      Marketing ideas
      SEO content
      Growth strategy
      Messaging
      Product direction
      I'm solo building, so AI is basically my growth team.
    3. Am I solving a real problem?
      I believe yes.
      I run a cleaning business and customers constantly ask:
      Do you know a reliable cleaner?
      Can you recommend someone local?
      There's still friction around trust, availability, and transparency.
      I'm trying to simplify that.
      Right now I'm testing:
      AI content
      Organic growth
      Free distribution
      Trying to get the first 10, then iterate from there.
  10. 1

    The statement 'I don't debug code, I debug decisions' appeals to me. Do infrastructure and setup start to slow you down as things get more complicated, or is your existing setup handling it well?

  11. 1

    This is awesome you achieved so much just through prompting. I would still advocate for learning the basics of coding though. At some point, your product will become large enough that Claude will start making mistakes and going in circles. That's where you can put on your developer hat and help it out. A 20 min fix from an AI-assisted developer might take someone three days if they try to solve it through prompting alone.

    1. 1

      Second that, it took me 3 months and an update on GPT to solve my issue. 😆

  12. 1

    Really appreciate the transparency here — especially the honest "10 users, zero paying customers" part. Most people skip that. One thing I've noticed with no-code/AI-built products: the hardest part isn't building v1, it's maintaining and iterating when users start requesting features that push the boundaries of what the AI tools can generate. How are you handling that? Do you have a plan for when Claude-generated code needs debugging or refactoring at a deeper level?

    The "I built it and nobody came" stage is real, and props for being honest about it. Two things that worked for others in a similar spot: (1) find 5 subreddits where people already ask the type of question your tool answers, then answer their question manually using your tool. If people ask how you did it, share naturally. (2) Your auto-published research reports are a smart SEO play, but add a "powered by 5 AI models" byline with a CTA at the bottom. The reports themselves become the distribution. You are past the hardest part, which is actually shipping. The next phase is just reps on distribution.

  13. 1

    Just keep going: talk to people, make videos, send emails, do whatever you have to do to tell people about your work. Be proud of it and let people know it. You already did the easy part (and by the way, Java made me cry for an hour at 2am on a Sunday). Now start the hard one, and believe me when I say you will get through this too.

  14. 1

    the interesting bit isn't the no-code part. it's the $10k and an idea → 10 real users in 6 weeks. most people with the same setup never get a single user

  15. 1

    Hey Sanice

    That's awesome, and I can totally relate to this.
    I have ~20 years of experience in design, project management, business strategy... But what I'm not, is a developer.

    I started with tools like builder, figma make, replit, lovable... But they felt like a Wix approach.

    Then I met Claude Code. All the skills I used to put to work with real people, I now utilize with Claude.

    Now, I don't have to wait a week to get a response. Designs are implemented instantly. And with the use of custom /skills and my latest build LayerView, I can rely on Claude to take on an agentic role, without expecting 3 refactors.

    Keep shipping, keep learning, I wish you all the best!

  16. 1

    Fellow finance person building with AI here. I trade-ish too (built an earnings analysis tool) so the cross-referencing instinct resonates deeply. No single model gives you the full picture, same as no single analyst covers every angle of an earnings report.

    Your self-awareness about the 6 weeks building vs 3 building and 3 talking split is the most important line in this post. I made the exact same mistake. Built a full scoring engine, polished the UI, added features nobody asked for. Meanwhile zero strangers had ever seen it.

    To answer your question about what worked for first strangers: for me it was SEO through the product itself. Every earnings report I score becomes a page that Google indexes. People searching for specific tickers find the scored page and some of them sign up. It is slow but it compounds and it sounds like your auto-published research reports strategy is the same idea. $18/month for an automated content engine is an incredible ROI if even 5% of those pages rank.

    The honest answer to "am I solving a real problem" is that multi-model research is a real workflow that power users already do manually. The question is whether the people who need it know to search for it. That is the distribution problem, not a product problem.

    Keep posting updates here. The transparency is what makes people want to help.

  17. 1

    6 weeks and a live product is solid. but zero paying customers - that's the next wall. not the code, not the build. how are you thinking about pricing?

  18. 1

    This is honestly one of the most real build-in-public posts I’ve seen.

    You don’t have a product problem — you have a distribution + conversion problem.

    Right now, if someone lands on your site, it’s not immediately clear who it’s for or why they should care. That’s fixable.

    I help early-stage SaaS founders turn “cool tech” into something that actually converts (landing pages, funnels, positioning — especially for AI tools like this).

    If you’re open to it, I’d love to help you refine:
    – your landing page messaging
    – user flow from first visit → signup
    – and positioning for a specific niche (traders/founders/etc.)

    No hard pitch — think you’re very close and it’d be a shame if this didn’t get traction.

    Happy to take a look

  19. 1

    This is exactly the shift I’ve been noticing — not just that non-devs can build now, but that the cost of trying has collapsed.

    A year ago, this would have been a 6–12 month commitment. Now it's a few weeks to get something real in front of users.

    The part I’m still figuring out is what happens after that — building is suddenly the easy part, but getting people to actually use it consistently is a completely different problem.

    Have you found anything that’s worked so far for getting those first users beyond just launching?

  20. 1

    Really enjoyed this — the honesty stands out.

    “I don’t debug code, I debug decisions” is a powerful shift. You clearly built something real, now it’s just a distribution problem.

    You’re at the right point — stop building, start talking to users. That’s where things click.

  21. 1

    This is really impressive, especially getting something shipped without a coding background.

    One thing I’m curious about — how are you thinking about distribution?

    I’m building something in a completely different space (pet memorials), but running into a similar challenge where building the product is one thing, but getting people to actually discover it is a whole different problem.

    Right now I’m leaning heavily into SEO, but even there it feels like there’s this weird “waiting phase” where Google just sits on your pages before doing anything.

    Did you focus more on launching fast + iterating, or did you have a clear distribution plan from the start?

  22. 1

    The adversarial multi-model approach is the real differentiator here, not the no-code story. watsonfoglift’s suggestion to lead with model disagreements is exactly right. “Where do the models disagree and why” is content nobody else is producing. That’s your marketing and your moat in one move.

  23. 1

    This is a fascinating case study in the product-manager-who-codes-through-AI model. What stands out is your point about understanding every architectural decision even without writing the code. That is actually a stronger position than many junior devs who copy-paste without grasping the trade-offs. The 5-stage pipeline approach mirrors how senior engineering teams structure complex systems. You have essentially designed a microservices architecture through pure product thinking. One concern I would flag early: with $18/month AI costs at 10 users, your per-user economics might get brutal at scale. Have you modeled what happens at 1,000 users running concurrent research pipelines? Caching frequently-asked topics and batching similar queries could dramatically cut costs before you hit that wall.

  24. 1

    This is an incredible story of persistence, SANICE_AI. Building a multi-model 5-stage pipeline without writing a line of code is a huge testament to how AI has lowered the floor for non-technical founders.
    The 'adversarial quality check' stage is a brilliant addition—most people don't realize that cross-referencing models is the only way to kill hallucinations in deep research. This kind of 'Product Manager as Builder' approach would be a perfect entry for the current competition. Entry is $19 and the winner gets a trip to Tokyo.
    Prize pool just opened at $0. Your odds are the best right now. Definitely worth a look while you're navigating the 'nobody knows I exist' stage!
    To answer your question: For the first 10 strangers, manual outreach in niche subreddits (like the ETH trading ones you know well) usually beats SEO in the early days.

  25. 1

    You didn’t build a product, you proved execution is no longer the bottleneck.
    Now the real game starts: figuring out if this solves a problem people actually care enough to pay for.

  26. 1

    The cross-referencing insight is the sharpest thing in this post. Single-model answers are the new "I Googled it" — trusted too quickly and wrong more often than people realize. The multi-model adversarial approach is how serious research actually works, and your trading background explains why you see that so clearly.

    On the auto-publish SEO strategy: I'd be cautious with 2 reports/day. The math is appealing ($18/mo for daily content), but there's growing evidence that auto-generated content without editorial depth gets filtered by both Google and AI search engines. A Zyppy analysis found that content updated within 30 days gets roughly 3x more AI citations — but only when it has genuine authority signals like original data and cited sources. Volume without editorial review risks producing the kind of thin content that search engines are specifically learning to deprioritize.

    What if you flipped it: instead of 2 daily auto-generated reports, publish 2 per week that showcase the multi-model disagreement? "Gemini said X, Claude said Y, here's why the difference matters." That IS your differentiator. Nobody else is showing where the models diverge, and that's where the original insight lives. That kind of content earns links and gets cited because it's genuinely novel.

    For the first 10 strangers: the fastest path I've seen is finding communities where people already have the pain you're solving and contributing without pitching. Options/crypto Twitter, fintech forums, trading subreddits. You're from that world — you know the language. One thoughtful thread analyzing a real trade using your multi-model approach would be worth more than 60 auto-published reports for building trust.

    1. 1

      Really appreciate you taking the time to write this — genuinely one of the most useful replies I've gotten.
      You're spot on about the auto-publish risk. I've been so heads-down building the pipeline that I hadn't stepped back to ask whether volume was actually serving the brand or just filling a content calendar. The Zyppy data point about 30-day freshness is interesting — I'll dig into that.
      The "show where the models diverge" idea honestly stopped me in my tracks. That IS the differentiator and I've been burying it inside the pipeline instead of making it the headline. Something like "Claude recommended holding, Gemini said sell, Grok flagged a regulatory risk nobody mentioned — here's why the disagreement matters more than any single answer." That's content nobody else can produce because nobody else is running adversarial multi-model pipelines. I'm going to test this format this week.
      On the community angle — yeah, I come from the options/crypto side (still run live bots on Deribit). A thread breaking down a real trade through the multi-model lens would probably land way better than any amount of auto-generated SEO content. The trust math is completely different when you're showing your own skin in the game.
      Going to scale the auto-publish down to 2-3/week and make each one actually showcase the model disagreements. Quality over quantity. Thanks for the push in the right direction.
