1. Ask HN: The government of my country blocked VPN access. What should I use?

Total comment count: 90

Summary

To bypass censorship, obtain VPN software and obfuscated configurations from providers that distribute them in hard-to-block ways. Obfuscation layers (e.g., obfs4proxy with a pre-shared key, or pluggable transports such as Shapeshifter) can help defeat DPI, but the VPN provider must support these protocols. Long-term evasion remains a challenge, as statistical traffic analysis can reveal VPN use even with obfuscation. The article cautions that local conditions (e.g., in Indonesia) matter but does not speculate about them, and it endorses Mullvad as a trustworthy, technically competent option in this space.

Top 1 Comment Summary

The comment describes censorship-circumvention work done for a VPN provider in the early 2020s. Key steps: obtain VPN software and configs distributed via hard-to-block channels and obfuscated packages; S3 and local partners can help with distribution in censorship-prone countries. Use obfuscation such as obfs4proxy (with a pre-shared key to hide both the traffic and the handshake), Shapeshifter (Operator Foundation), or other pluggable transports; the provider must support these protocols. Long-term evasion is hard due to traffic analysis and state DPI. The commenter endorses Mullvad as a trustworthy option in this niche (while noting no affiliation).

Top 2 Comment Summary

The piece notes the forum isn’t a good source for censorship circumvention. It says most members rely on their own VPNs (Tor, Tailscale/WireGuard-based, Mullvad) and lack experience with bypassing censorship. It advises seeking VPNs advertised for China, Russia, or Iran—described as cutting-edge tech that will work, though they may be less privacy-friendly than Mullvad.

2. My startup banking story (2023)

Total comment count: 4

Summary

An aspiring founder, naive about banks, opens a Chase business account with a $20k personal loan, then sees large seed, Series A, and Series B deposits land in it. A local Chase banker, “Alex,” repeatedly calls after the big deposits, but the founder ignores him. The company stays on the same Chase account for two years; eventually a VP of Finance notes roughly $35M sitting in cash and suggests moving to Silicon Valley Bank for startup-friendly services. The founder, who initially treated banks as interchangeable, begins to recognize the need for startup-specific banking.

Top 1 Comment Summary

A reference to a March 2023 Hacker News discussion titled “My startup banking story” (item 35157959) that drew 257 comments.

Top 2 Comment Summary

An unhappy customer rails against Chase Bank and urges others to bank elsewhere. They concede that the author (Mitch) says Chase wasn’t the problem, but contend that the bank’s archaic controls make doing business with it tedious and unenjoyable.

3. Uncertain

Total comment count: 12

Summary

Humans overvalue certainty, and software culture rewards confident answers. The piece argues for embracing uncertainty in code rather than treating decisions as strictly true/false. It reintroduces Uncertain<T>, based on a 2014 UW/Microsoft paper, to encode probabilistic data in the type system. In Swift (via a GitHub port), comparisons yield Uncertain<Bool>, GPS errors follow a Rayleigh distribution, and computations form a graph evaluated by sampling, with SPRT adapting the sample count. It advocates migrating uncertain calculations incrementally, cautions about sampling costs, and recommends profiling (Instruments.app) and starting small, e.g., with GPS-related comparisons.
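
The mechanics are easy to sketch outside Swift. Below is a minimal Python illustration of the idea (an assumption-laden sketch, not the Swift port’s API): an uncertain value wraps a sampler, GPS error is drawn from a Rayleigh distribution, and a comparison returns estimated evidence rather than a plain bool. The library adapts the sample count with SPRT; this sketch uses a fixed budget.

```python
import math
import random


class Uncertain:
    """Toy sampling-based uncertain value (illustration only)."""

    def __init__(self, sampler):
        self.sampler = sampler  # () -> float: draw one sample from the distribution

    @classmethod
    def rayleigh(cls, scale):
        # GPS horizontal error is commonly modelled as Rayleigh-distributed;
        # sample via the inverse CDF: x = scale * sqrt(-2 * ln(1 - u)).
        return cls(lambda: scale * math.sqrt(-2.0 * math.log(1.0 - random.random())))

    def prob_greater_than(self, threshold, samples=1000):
        # A comparison yields evidence, not a bool: estimate P(value > threshold).
        # The paper uses SPRT to adapt `samples`; a fixed budget keeps this short.
        hits = sum(self.sampler() > threshold for _ in range(samples))
        return hits / samples


# Hypothetical use: with ~4 m of GPS error, only act when the evidence is strong.
gps_error = Uncertain.rayleigh(scale=4.0)
if gps_error.prob_greater_than(5.0) > 0.95:
    print("confident the error exceeds 5 m")
```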

Top 1 Comment Summary

Interval arithmetic has been reinvented many times, with implementations in Boost and flint. The author is puzzled that, despite these reinventions, the approach hasn’t become mainstream and expresses interest in hearing from anyone who used it in production and later found it a bad idea.
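
For readers unfamiliar with the technique, here is a minimal Python sketch of interval arithmetic: bounds, rather than point values, propagate through each operation. (Illustrative only; real libraries such as Boost’s interval arithmetic also handle directed rounding, division by intervals containing zero, and so on.)

```python
from dataclasses import dataclass


@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        # The sum is bounded by the sums of the endpoints.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # The four endpoint products bound the result.
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))


# A measurement of 10 ± 0.5 times one of 3 ± 0.1 stays an interval, not a point.
print(Interval(9.5, 10.5) * Interval(2.9, 3.1))  # roughly Interval(lo=27.55, hi=32.55)
```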

Top 2 Comment Summary

GPS uncertainty is often treated as circular, which holds only for open-sky, long-duration fixes; real uncertainty is more complex and is reported in several different ways. This matters because localization shouldn’t collapse to a single point in scenarios like autonomous driving, where non-circular multipath effects dominate. Going deeper into these models quickly starts to resemble implementing particle filters and related methods.

4. Some thoughts on LLMs and software development

Total comment count: 17

Summary

Before taking a few weeks away from the site, the author reflects on LLMs/AI in software. Surveys miss real workflows: many users rely on auto-complete, but those who get real value prefer LLMs that read and edit code directly. The future of programming is uncertain, so he encourages experimenting and sharing workflows. AI is a bubble likely to pop, yet it may still deliver meaningful value. He plans to attend GOTO Copenhagen. Hallucinations are a feature, not a bug: ask questions multiple times, compare answers, verify results, and treat LLMs like tolerant engineering systems rather than perfect calculators.

Top 1 Comment Summary

The commenter finds LLMs boost productivity well beyond autocomplete, though they worry about over-reliance. They’ve had success pairing Test Driven Development with Claude Sonnet (and GPT-5), delivering features in discrete red/green cycles. They note few people discuss this approach: the TDD experts aren’t the same people pushing aggressive LLM agent use. The real upside, they argue, lies in using multiple agents and layered prompts at different abstraction levels, not single-tool autocomplete. A fertile, if caution-filled, field is opening up around how to write software with these tools.

Top 2 Comment Summary

Argues that while other engineering fields must accept the world’s variability, LLMs push software into non-determinism. Software engineering already has ways to enforce determinism, and moving away from that is a step backwards.

5. Building your own CLI coding agent with Pydantic-AI

Total comment count: 4

Summary

Custom coding agents differ from chatbots: they can read code, run tests, and update a codebase. Ben O’Mahony describes building a bespoke CLI coding agent with Pydantic-AI and the Model Context Protocol (MCP), running Claude Sonnet 4 on AWS Bedrock. The tool is tailored to their internal project context and development standards (testing, docs, code reasoning, filesystem operations), enabling the agent to read code, execute pytest, and apply changes. The architecture is modular and extensible via MCP servers. An early issue was the agent suggesting changes to the tests themselves; they steered it toward minimal fixes in line with Test Driven Development.
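
As a rough illustration of the shape of such an agent (a sketch under assumptions, not the article’s actual code), here is a minimal Pydantic-AI setup with two tools, one to read files and one to run pytest; the model identifier and prompt are placeholders rather than the team’s Bedrock configuration.

```python
import subprocess
from pathlib import Path

from pydantic_ai import Agent

agent = Agent(
    "anthropic:claude-sonnet-4-0",  # placeholder model id; swap in your own provider/model
    system_prompt="Make the smallest change that turns a failing test green.",
)


@agent.tool_plain
def read_file(path: str) -> str:
    """Return the contents of a file so the model can reason about the code."""
    return Path(path).read_text()


@agent.tool_plain
def run_pytest(target: str = "") -> str:
    """Run pytest and return its output so the model can see any failures."""
    cmd = ["pytest"] + ([target] if target else [])
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.stdout + result.stderr


if __name__ == "__main__":
    result = agent.run_sync("Run the tests and fix the first failure you find.")
    print(result.output)  # named `.data` in older Pydantic-AI releases
```

MCP servers would slot in alongside local tools like these, which is what makes the architecture extensible without rewriting the agent itself.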

Top 1 Comment Summary

The commenter recently switched from a personal agents library to Pydantic-AI, and enjoys its integration with Langfuse. They are looking for efficient ways to evaluate coding agents by comparing components: tool implementations (e.g., diff vs. full-file edits), prompt designs, model choices (Claude vs. GPT-5 variants), sub-agents, and task lists. They want ablation-style metrics that capture both success rate and cost. While many setups ‘kinda work,’ they’re probing for notable improvements or pitfalls, and note that the Claude Code CLI feels slightly better, though they lack objective A/B comparisons.

Top 2 Comment Summary

A user praises Pydantic-AI as delightful for their long-running coding-agent CLI project, noting it makes building agents easy where lower-level APIs can be painful. They point to the Rune Code project on GitHub and say they switched to Pydantic-AI not just for features but to escape LiteLLM’s poor documentation and developer experience, hoping Pydantic-AI offers a better universal interface.

6. Are OpenAI and Anthropic losing money on inference?

Total comment count: 55

Summary

This piece probes AI inference costs with napkin math. Using a 72×H100 cluster ($2/hour per GPU) running 9 model instances, it estimates MoE throughput at 1.44M input tokens/s per instance, or roughly 46.8B input tokens/hour across the cluster, versus only ~46.7M output tokens/hour. Input tokens are therefore cheap (~$0.003 per million) while outputs are costly (~$3.08 per million), a ~1000× asymmetry driven by memory bandwidth vs. compute. Long contexts can become compute-bound, raising costs 2–10×; Claude Code caps context at 200k tokens to stay memory-bound. Overall, the economics look solid for developers and daily power users.
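
The per-million prices follow directly from those throughput figures; a quick back-of-the-envelope check in Python (assuming, as the quoted prices imply, that the hourly token counts are cluster-wide totals):

```python
# Reproduce the article's napkin math: 72 H100s at $2/GPU-hour, with cluster-wide
# throughput of ~46.8B input tokens/hour versus ~46.7M output tokens/hour.
gpus, price_per_gpu_hour = 72, 2.00
cluster_cost_per_hour = gpus * price_per_gpu_hour            # $144/hour

input_tokens_per_hour = 46.8e9
output_tokens_per_hour = 46.7e6

cost_per_m_input = cluster_cost_per_hour / input_tokens_per_hour * 1e6    # ~$0.003
cost_per_m_output = cluster_cost_per_hour / output_tokens_per_hour * 1e6  # ~$3.08

print(f"${cost_per_m_input:.4f} per 1M input tokens")              # ~$0.0031
print(f"${cost_per_m_output:.2f} per 1M output tokens")            # ~$3.08
print(f"asymmetry: ~{cost_per_m_output / cost_per_m_input:.0f}x")  # ~1000x
```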

Top 1 Comment Summary

The modeling shows inference can achieve 50%+ gross margins, mainly depending on GPU depreciation and host utilization. The key margin question is whether training costs are included. If training costs aren’t capitalized and amortized, margins appear strong; if they are capitalized and accounted for, margins deteriorate significantly.

Top 2 Comment Summary

Sam Altman said most current work focuses on model inference, and the company is profitable on inference. He added that if they didn’t have to pay for training, they would be very profitable, signaling that training costs are a major drag and that profitability now mainly comes from inference revenue.

7. Launch HN: Dedalus Labs (YC S25) – Vercel for Agents

Total comment count: 4

Summary

Dedalus Labs unveils a streamlined MCP-based tool-calling platform for LLMs. Frustrated by early attempts to build a stateful code-execution sandbox (manual server setup, auth, and cloud config), they offer a single API endpoint to deploy streamable HTTP MCP servers. OpenAI-compatible SDKs let you drop MCP-powered tools into your codebase. A sample shows creating a Dedalus client and runner to run prompts with tools, MCP servers, and multiple models, streaming results. They acknowledge MCP auth shortcomings, plan a secure auth solution, an MCP marketplace, and MIT-licensed SDKs; seeking feedback.

Top 1 Comment Summary

Congratulating the launch, the reviewer finds it interesting and praises how easily local code tools can be combined with remote MCP servers. They see promise in the marketplace but urge curation, noting many servers lack descriptions and link to private GitHub repos. Overall, they like the vision and look forward to trying it.

Top 2 Comment Summary

Congratulates the launch and asks whether tool inputs and outputs must be stored server-side while waiting for responses. Also notes they’re building a specialized coding agent for integrations and prefer stateless APIs to avoid storing user code.

8. VLT observations of interstellar comet 3I/ATLAS II

Total comment count: 3

Summary

arXivLabs is a framework that lets collaborators develop and share new arXiv features directly on the site. It upholds openness, community, excellence, and user data privacy, and only partners who meet these values are engaged. If you have a project idea to benefit the arXiv community, learn more about arXivLabs. You can also get operational status updates via email or Slack.

Top 1 Comment Summary

The article notes that this is the third detected interstellar object and asks whether detections are rising because interstellar visitors are more common or because detection techniques have improved in recent years.

Top 2 Comment Summary

The comment is a single question, “So is it a spaceship or not?”, asking whether the object could be an artificial craft rather than a natural comet.

9. A forgotten medieval fruit with a vulgar name (2021)

Total comment count: 4

Summary

Medlar (Mespilus germanica) was a medieval European favorite despite its rotten-looking fruit. In 2011, archaeologists found 19 unusually well-preserved medlar seeds in a Roman cesspit at Tasgetium (Eschenz, Switzerland), preserved by waterlogged, oxygen-free conditions. Known for about 900 years as the “open-arse” (and by French names like cul de chien), the medlar spread from Greek/Roman roots to Charlemagne’s gardens, and from Chaucer to Shakespeare. It peaked in the 1600s, then declined, vanishing from most markets by the mid-20th century. Today it survives as a historical curiosity. Its origins may lie in Western Asia some 3,000 years ago, and the fruit is harvested around December.

Top 1 Comment Summary

A commenter finds it intriguing and now wants to try one, noting it’s the sort of thing home-orchard societies would rave about, likely grown right beside a pawpaw tree.

Top 2 Comment Summary

In suburban Los Angeles, medlar trees are common among Iranian-American and Armenian-American families, and the fruit is sold at Paradise Nursery in Chatsworth.

10. In Search of AI Psychosis

Total comment count: 4

Summary

AI psychosis is discussed as a possible phenomenon in which heavy chatbot use may trigger psychotic-like experiences. The article questions its prevalence, whether bots cause it or merely surface preexisting conditions, and whether psychosis is biological, treating the topic as exploratory and reasoning by analogy rather than arguing a firm thesis. It estimates yearly incidence at roughly 1 in 10,000 (by a loose definition) to 1 in 100,000 (by a strict one). A Lenin-mushroom anecdote shows how people accept official-sounding but absurd narratives, illustrating how weak world models and the aura of AI authority can shape belief.

Top 1 Comment Summary

The writer notes witnessing the same phenomenon twice among acquaintances, arguing that even before AI, social media created automated bubbles via algorithms that curate and reinforce what people see.

Top 2 Comment Summary

Marketing leans on implied product capabilities, while the broader community labels skeptics as crazy (a previously discussed Hacker News article is referenced). The huge funding behind these uses stifles honest discussion of problems, creating a culture that makes doubters feel crazy; as a result, arguments rarely gain traction unless they invoke thought-terminating claims about imagined current or future capabilities and their implications.