by concensure on 3/31/2026, 3:50:07 AM
Comments
by: gbnwl
OK, I'll take the opportunity to be the first non-self-promotional comment on this thread now that concensure and rohan2003 have done their ads.<p>Based on this post's current position on the front page, it seems to fall in line with a pattern we've all been seeing the past few months: HN is finally majority onboard with believing in the usefulness of coding agents and is celebrating this by rediscovering each and every personal "I improved CC by doing [blank] thing" from-scratch project.<p>That's all whatever. Fine. But what I'm really curious about is: does the HN community actually look at the random LLM-generated statistic-vomit text posted by creators like this and find itself convinced?<p>I ask because if you're new to random stat vomit, you're going to find yourself having to deal with it all the time soon, and I've yet to find good meta discussions about how we find signal in this noise. I used to use HN or selected Reddit community upvotes as a first-pass "possibly important" signal, but it's been getting worse and worse, as illustrated by posts like this getting upvoted to the top without any genuine discussion.
3/31/2026, 4:40:54 AM
by: concensure
The Problem: Most RAG-based coding tools treat code as unstructured text, relying on probabilistic vector search that often misses critical functional dependencies. This leads to the "Edit-Fail-Retry" loop, where the LLM consumes more time and money through repeated failures.<p>The Solution: Semantic uses a local AST (Abstract Syntax Tree) parser to build a Logical Node Graph of the codebase. Instead of guessing what is relevant, it deterministically retrieves the specific functional skeletons and call-site signatures required for a task.<p>The Shift: From "Token Savings" to "Step Savings". Earlier versions of this project focused on minimizing tokens per call. However, our latest benchmarks show that investing more tokens into high-precision context leads to significantly fewer developer intervention steps.<p>Latest A/B Benchmark (2026-03-27):<p><pre><code>  Provider:      OpenAI (gpt-4o / o1)
  Suite:         11-task core suite (atomic coding tasks)
  Configuration: autoroute_first=true, single_file_fast_path=false

  Run Variant            Token Delta (per call)  Step Savings (vs Baseline)  Task Success
  Baseline (2026-03-13)  -18.62%                 —                           11/11
  Hardened A             +8.07%                  —                           11/11
  Enhanced (2026-03-27)  -6.73%                  +27.78%                     11/11
</code></pre>Key Takeaways:<p><pre><code>  - The ROI of Precision: While the "Enhanced" run used roughly 6.73% more tokens than the baseline per request, it required 27.78% fewer steps to reach a successful solution.
  - Deterministic Accuracy: By feeding the LLM a "Logical Skeleton" rather than fuzzy similarity-search chunks, we eliminate the "lost in the middle" effect. The agent understands the consequences of an edit before it writes a single line.
  - Context Density: We are effectively trading cheap input tokens for expensive developer time and agent compute cycles.
</code></pre>Detailed breakdowns of the task suite and methodology are available in docs/AB_TEST_DEV_RESULTS.md.
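For readers wondering what "functional skeletons and call-site signatures" would even look like in practice: here is a minimal, hypothetical sketch using Python's stdlib `ast` module. It is not Semantic's actual implementation (which is not shown in the post); the function names `extract_skeletons` and `call_edges` are invented for illustration. It shows the basic idea of deterministically pulling signature-level context and caller→callee edges from parsed source instead of similarity-searched text chunks.<p><pre><code>import ast

def extract_skeletons(source: str) -> dict[str, str]:
    # Map each function name to a "skeleton": the def line only, body elided.
    tree = ast.parse(source)
    skeletons = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            args = ", ".join(a.arg for a in node.args.args)
            skeletons[node.name] = f"def {node.name}({args}): ..."
    return skeletons

def call_edges(source: str) -> set[tuple[str, str]]:
    # Collect (caller, callee) edges for direct name calls -- a crude
    # stand-in for the "Logical Node Graph" described above.
    tree = ast.parse(source)
    edges = set()
    for fn in ast.walk(tree):
        if isinstance(fn, ast.FunctionDef):
            for node in ast.walk(fn):
                if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                    edges.add((fn.name, node.func.id))
    return edges

SRC = '''
def load(path):
    return open(path).read()

def process(path):
    data = load(path)
    return data.upper()
'''

print(extract_skeletons(SRC))
print(call_edges(SRC))
</code></pre>Given a task that edits process(), a retriever built this way can walk the edges to find that load() is a dependency and include only those two skeletons in the prompt, rather than whatever chunks happen to be cosine-near the query.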
3/31/2026, 3:50:07 AM
by: thestack_ai
[dead]
3/31/2026, 5:27:19 AM
by: rs545837
[flagged]
3/31/2026, 4:38:43 AM