1. How I code with AI on a budget/free
Total comment count: 51
Summary
The author describes coding with various free AI models, emphasizing that consulting multiple platforms gives different perspectives on the same problem. They highlight Claude.ai for planning and for working through complex problems, while cautioning that Grok is prone to misinformation. For problem-solving they prefer plain web chats, finding them more effective than agent frameworks, which can convolute requests. They also introduce AI Code Prep, a tool that curates exactly the files an AI needs as context, working around the limitations of typical coding agents.
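AI Code Prep itself is not shown here; the following is a rough Python sketch of the curation idea, using hypothetical file paths and a hypothetical question: hand-pick the files a problem actually depends on and bundle them, with filename markers, into one paste-ready prompt.

```python
# Rough sketch of the "curated context" idea above, not AI Code Prep itself.
# The file paths and question are hypothetical; the point is to bundle only the
# relevant sources, with clear filename markers, into one paste-ready prompt.
from pathlib import Path

selected = [Path("src/app.py"), Path("src/utils/retry.py")]  # hand-picked files
question = "Why does the retry helper loop forever when the API returns 429?"

parts = [question, "", "Relevant files:"]
for path in selected:
    parts.append(f"\n--- {path} ---")
    parts.append(path.read_text(encoding="utf-8"))

print("\n".join(parts))  # paste the result into the web chat of your choice
```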
Top 1 Comment Summary
The comment clears up confusion about navigating the post: additional content on pages 2 and 3 can be reached via an arrow at the bottom of the page.
Top 2 Comment Summary
The author discusses the rapid advancements in AI models, highlighting favorites like GLM-4.5 and Kimi K2. They created a context helper app, aimed at improving workflow between AI chat interfaces and an IDE, featuring preset buttons for common tasks and context management. The app, designed for personal efficiency, allows users to generate prompts for an AI coding agent, “Cline,” and facilitates coding by integrating various AI models. The author emphasizes that users can leverage these tools for free coding, optimizing outputs while minimizing costs associated with AI API usage.
2. Fight Chat Control
Total comment count: 49
Summary
The “Chat Control” proposal mandates the scanning of all private digital communications, threatening privacy rights and digital security for EU citizens. It entails automatic scans of encrypted messages without consent, which undermines fundamental rights to privacy and data protection. Critics argue it creates mass surveillance, misidentifies innocent content, and fails to enhance child protection. The plan may lead to unreliable AI reports, harm innovation, and set a dangerous surveillance precedent, while EU politicians remain exempt from such monitoring. Fears arise that this could enable authoritarian regimes to implement similar measures globally.
Top 1 Comment Summary
The European Parliament has proposed a law mandating that online pornography must include age verification tools, with violations punishable by up to one year in prison. This provision, known as “Amendment 186,” was a last-minute addition and has received little attention from media and advocacy groups. While the amendment has passed its first reading, it has now been sent back to the Council of the European Union, and a second reading appears unlikely.
Top 2 Comment Summary
The author expresses frustration, feeling that everyone, including the left, supports a system they disagree with. They believe democracy is over for them and feel powerless, other than donating monthly to GrapheneOS.
3. Try and
Total comment count: 56
Summary
The article discusses the phrase “try and,” which can be followed by various structures, including noun phrases and verb phrases. While it is similar in meaning to “try to,” it is often considered prescriptively incorrect. “Try and” has been more prevalent in British English, although it appeared as early as the late 1500s. Key syntactic properties of “try and” distinguish it from regular coordination, such as its inability to be reordered and the “bare form condition,” requiring that both verbs remain uninflected. Dialect variations show differing acceptability for inflected forms.
Top 1 Comment Summary
The commenter initially mistook the site for a new JavaScript syntax proposal, but it turned out to document linguistic phenomena. They found a page on their favorite phenomenon, “what all,” whose header was changed to “Who all says this?”, a detail they appreciated.
Top 2 Comment Summary
The Japanese phrase “Xて見る” is used to express the idea of trying to do something, translating to “we’ll try [X]ing.” This construction can be interpreted as “we’ll see [what happens] when we [X]” or simply “we’ll try and [X].” It highlights the exploration of outcomes through the act of trying.
4. Abusing Entra OAuth for fun and access to internal Microsoft applications
Total comment count: 19
Summary
The blog details how the author gained access to more than 22 internal Microsoft services through flaws in Microsoft’s authentication setup. Sidetracked from documentation work, the author poked at the aka.ms URL shortener and noticed various Microsoft login screens. By experimenting with sign-in attempts, they got into an internal Engineering Hub meant for Microsoft employees: a misconfiguration had erroneously granted access permissions to their personal Microsoft account. After reporting the finding to Microsoft’s Security Response Center, they wondered how many other Microsoft services were similarly exposed and mapped a large number of associated domains.
Top 1 Comment Summary
The author criticizes Microsoft documentation as confusing and inadequate, particularly in the context of building a single-tenant SSO login with Entra ID. They struggled to navigate through convoluted information and jargon, finding it difficult to locate practical guides or helpful resources, which contributed to a frustrating experience.
Top 2 Comment Summary
A former Microsoft PM stresses that multi-tenant applications must validate both the tenant and the subject when authorizing a caller. The right approach is more nuanced than checking the “iss” or “tid” claim alone: to prevent unauthorized access, the subject must be validated alongside the tenant, either by keying access on a combined identifier (e.g., tid+oid) or by verifying both before granting anything. The comment links to further documentation on claims validation.
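A minimal sketch of that check (not Microsoft’s reference implementation), assuming the token’s signature, audience, and expiry have already been validated upstream and the claims decoded into a dict; the allow-list below is purely illustrative:

```python
# Minimal sketch of the tid+oid authorization check described above, assuming the
# token's signature, audience, and expiry have already been validated upstream.
# The allow-list is illustrative; a real application would back it with its own
# tenant/user onboarding records.

# Principals that have actually been onboarded, keyed by (tenant id, object id).
AUTHORIZED_PRINCIPALS = {
    ("11111111-2222-3333-4444-555555555555", "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"),
}

def is_authorized(claims: dict) -> bool:
    """Authorize on the combination of tenant (tid) and subject (oid),
    not on the issuer or tenant alone."""
    tid = claims.get("tid")
    oid = claims.get("oid")
    if not tid or not oid:
        return False
    return (tid, oid) in AUTHORIZED_PRINCIPALS

# Example: a token from an unknown tenant (or an unknown user in a known tenant)
# is rejected even though it is an otherwise valid, correctly issued token.
claims = {"tid": "99999999-unknown-tenant", "oid": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"}
assert not is_authorized(claims)
```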
5. Abogen – Generate audiobooks from EPUBs, PDFs and text
Total comment count: 20
Summary
Abogen is a text-to-speech tool that quickly converts EPUB, PDF, or text files into audio with synchronized subtitles. It’s effective for creating audiobooks and voiceovers for platforms like Instagram and YouTube. The tool also features a voice mixer for custom voices and a queue mode for batch processing, and it automatically adds chapter markers and metadata tags for audiobooks. Installation is straightforward, including automatic Python setup. Errors can be troubleshot from the command line, outputs can be personalized with various voice models, and performance depends on hardware.
Top 1 Comment Summary
The comment discusses the risks indie authors face when distributing AI-generated audiobooks, noting that readers may dismiss titles that hint at AI involvement. To avoid this, the commenter hires voice actors with distinctive accents and backgrounds, particularly speakers of English as a second language, which lends authenticity and sets the work apart. The approach sidesteps the negative perception of AI narration while appealing to a broader audience seeking diverse narratives.
Top 2 Comment Summary
The comment proposes a pipeline connecting Calibre-Web, Abogen, and Audiobookshelf: Calibre-Web supplies the books, Abogen converts them into audio, and Audiobookshelf hosts the resulting files. The goal is to support the hearing impaired.
6. GPT-OSS vs. Qwen3 and a detailed look how things evolved since GPT-2
Total comment count: 16
Summary
OpenAI has released two new open-weight large language models, gpt-oss-120b and gpt-oss-20b, their first such release since GPT-2 in 2019. Thanks to inference optimizations, both can run locally on suitable hardware. Rather than making drastic architectural changes, gpt-oss refines familiar design choices. The article walks through model comparisons, architecture details, and benchmarks against GPT-5. The 20B model runs on consumer GPUs with 16 GB of memory, while the 120B model needs beefier hardware. Overall, OpenAI continues to build on the transformer architecture, with no major shifts observed.
Top 1 Comment Summary
The gpt-oss models blend familiar optimizations like RoPE and MoE with less common choices such as small sliding-window sizes. Notably, MXFP4 quantization lets sizable models run efficiently (the 20B on 16 GB cards, the 120B on higher-end GPUs), which could open up more experimentation for indie developers and researchers. A key open question is whether gpt-oss’s focus on reasoning foreshadows a split of model development into specialized “reasoners” and “knowledge bases,” which would shape future system architecture.
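As rough back-of-the-envelope arithmetic (my own, not from the comment), the 4-bit MXFP4 weights are what make those hardware targets plausible; actual memory use also includes activations, the KV cache, and per-block scaling overhead:

```python
# Rough back-of-the-envelope arithmetic (not from the comment) for why 4-bit
# MXFP4 weights make the quoted hardware targets plausible. Real memory use also
# includes activations, the KV cache, and per-block scaling overhead.
def weight_gib(params_billion: float, bits_per_weight: float) -> float:
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30

print(f"gpt-oss-20b  @  4-bit ~ {weight_gib(20, 4):.1f} GiB of weights")   # ~9.3 GiB
print(f"gpt-oss-120b @  4-bit ~ {weight_gib(120, 4):.1f} GiB of weights")  # ~55.9 GiB
print(f"gpt-oss-20b  @ 16-bit ~ {weight_gib(20, 16):.1f} GiB of weights")  # ~37.3 GiB
```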
Top 2 Comment Summary
In the commenter’s local testing, Qwen3 outperformed its competitors, excelling at prompt adherence and sounding more natural, while gpt-oss (120B) underperformed on logic puzzles. The gap may come down to training techniques, data quality, model dimensions, and the balance between large and small experts.
7. Writing simple tab-completions for Bash and Zsh
Total comment count: 17
Summary
The blog post by Li Haoyi discusses setting up tab completions in both Bash and Zsh, crucial for enhancing user experience in CLI tools. It highlights the challenge of differing APIs between the two shells and the benefit of displaying descriptions with completions, which Bash lacks by default. The author provides a guide based on implementing tab completion in the Mill build tool, illustrating how to create completion functions for each shell, register them, and handle descriptions for better usability. This setup aims to improve cross-shell compatibility and enrich the tab-completion experience for users.
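The post implements its completions as Bash and Zsh shell functions; as a rough illustration of the underlying plumbing, here is a minimal Python external completer using Bash’s complete -C protocol, assuming a newline-separated list of candidates is all that is needed (Bash passes the command name, the word being completed, and the previous word as arguments, and each printed line becomes one candidate; descriptions are not supported this way). The subcommand list is hypothetical.

```python
#!/usr/bin/env python3
# Rough illustration of tab-completion plumbing via bash's external-completer
# protocol (`complete -C`), rather than the shell functions the post implements.
# Register with:   complete -C /path/to/this_script.py mycmd
# Bash invokes the script with argv[1] = command name, argv[2] = word being
# completed, argv[3] = previous word; each printed line becomes one candidate.
# The subcommand list below is hypothetical.
import sys

SUBCOMMANDS = ["build", "clean", "test", "publish"]

def main() -> None:
    current = sys.argv[2] if len(sys.argv) > 2 else ""
    for candidate in SUBCOMMANDS:
        if candidate.startswith(current):
            print(candidate)

if __name__ == "__main__":
    main()
```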
Top 1 Comment Summary
In Fish shell, you can easily generate command completions by running fish_update_completions, which parses the man pages on your system and stores completion files in ~/.cache/fish/generated_completions/. If a man page is poorly written or missing, you can create your own completions in a simple format. For guidance, refer to the official documentation on Fish completions. An example completion for the curl command is provided, demonstrating how to specify short and long options along with descriptions.
Top 2 Comment Summary
The commenter expresses frustration with the current behavior of bash completion, which blocks file or directory name completion whenever it decides that is inappropriate for the cursor’s position. They argue completion should always fall back to filenames rather than refuse to complete at all. This irritates them enough to consider disabling all completion scripts: it breaks decades of muscle memory and resembles UI design that prevents users from doing what they want.
8. Show HN: Engineering.fyi – Search across tech engineering blogs in one place
Total comment count: 50
Summary
Summary unavailable.
Top 1 Comment Summary
The comment notes that only 16 companies are listed, which feels limited, and asks for the ClickHouse engineering blog to be added, providing a link for consideration.
Top 2 Comment Summary
The author expresses nostalgia for the era of RSS feeds, where users could independently aggregate their news and blogs without the distractions posed by platforms like Substack and Medium.
9. Booting 5000 Erlangs on Ampere One 192-core
Total comment count: 5
Summary
Underjord, an artisanal consultancy focused on Elixir and Nerves, describes progress in running virtual Linux IoT devices. With an Ampere One machine offering 192 cores and 1 TB of RAM, the goal is to push well beyond the previous count of 500 devices. They introduce little_loader, a new bootloader written to streamline the boot process for ARM64 virtual machines, and switching to KVM has significantly improved boot times and reduced memory usage. Some compilation issues remain unresolved. Overall, the project aims to make large-scale deployment of virtual devices more efficient.
Top 1 Comment Summary
The term “5000 Erlangs” actually refers to 5000 instances of an Erlang interpreter, not the unit of measure for voice calls. An Erlang, as a unit, represents one voice call for one hour.
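For the unit itself, offered traffic in erlangs is the call arrival rate multiplied by the mean call duration; a quick illustration with made-up numbers:

```python
# Quick illustration of the erlang as a traffic unit (example numbers are my own):
# offered traffic E = call arrival rate * mean call duration.
calls_per_hour = 90
mean_call_hours = 6 / 60          # six-minute calls
traffic_erlangs = calls_per_hour * mean_call_hours
print(traffic_erlangs)            # 9.0 -> nine circuits busy on average
```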
Top 2 Comment Summary
The comment discusses a $5000 machine designed for cloud providers and telcos, highlighting its potential for hosting Erlang VMs at very low cost. It frames the many-core design as a significant way around the limits of Moore’s Law, suggesting a shift toward many smaller cores rather than ever-larger language models (LLMs).
10. Curious about the training data of OpenAI’s new GPT-OSS models? I was too
Total comment count: 14
Summary
The article informs users that they need to switch to a supported browser to continue accessing x.com. It provides a link to a list of supported browsers in the Help Center, and includes mentions of the site’s Terms of Service, Privacy Policy, Cookie Policy, and Imprint. The content is attributed to X Corp, dated 2025.
Top 1 Comment Summary
The comment responds to the OP’s analysis of programming-language frequencies in generated text, which suggested the model was heavily trained on Perl. The commenter argues the results say more about Perl’s versatility than about the training data, noting that 93% of inkblots can be valid Perl scripts.
Top 2 Comment Summary
The comment criticizes the Twitter analysis as lacking rigor and scientific validity, calling it closer to clickbait than to a serious study. It questions how the 10 million examples were generated and points out flaws in how programming languages were classified. The commenter finds it unacceptable to wave away anomalous results, arguing that such inconsistencies signal broader methodological problems that make the data unreliable, and calls for higher standards in data analysis along with an honest acknowledgment of errors that could compromise the findings.