1. A QR code that sends you to a different destination – lenticular and adversarial

Total comment count: 20

Summary

The article presents a “lenticular” QR code: two QR codes merged into a single image so that a scan can resolve to either of two embedded destinations, and explores how such an ambiguous code could be used adversarially.

Top 1 Comment Summary

The article discusses a hypothetical attack involving a dynamic QR code in public settings, where the code changes based on real-time user analysis:

  1. Setup: The QR code is designed as a half/half code, allowing for dynamic alterations. A system uses external inputs, like camera data, to assess user characteristics and adjust the QR code’s colors to influence which users are likely to scan it.

  2. Malicious Uses:

    • Feedback Manipulation: The code could present different feedback forms to different demographics to skew results either positively or negatively.
    • Biased Prize Winning: It could offer a chance to win a prize but bias the probability based on identifiable traits like race, age, or attractiveness.
    • Targeted Attacks: Specific individuals could be directed to different WiFi networks or altered payment pages for malicious purposes.
  3. Effectiveness: The attack would be less effective in static environments, and the author doubts that most people would notice the QR code changing dynamically, especially under typical public lighting conditions.

The article concludes that while the concept is intriguing, its practical application in static settings might be limited.

Top 2 Comment Summary

The article describes a website where users can create and test an ambiguous QR code. This QR code merges two different QR codes into one image by splitting cells diagonally when the patterns differ, or filling them solidly when they match. The technique leverages the high error correction of QR codes (level ‘H’) to allow scanners to read either of the two embedded URLs, with a noted tendency to favor the second URL. The creator provides a link to the website for people to try this out themselves.
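To make the construction concrete, here is a minimal sketch of the cell-merging idea in Python, assuming the `qrcode` and `Pillow` packages. The URLs, version, and cell size are placeholders, and the linked site’s generator is more careful about which cells it sacrifices; whether both payloads survive a given scanner depends on how much of the level-H error-correction budget the merge consumes.

```python
# Sketch of the half/half merge described above: two QR codes generated at
# error-correction level H are overlaid cell by cell. Matching cells are
# drawn solid; differing cells are split along the diagonal, one triangle
# per code. Illustrative only -- not the site's actual implementation.
import qrcode
from PIL import Image, ImageDraw

def matrix(url, version=5):
    qr = qrcode.QRCode(version=version,
                       error_correction=qrcode.constants.ERROR_CORRECT_H,
                       border=0)
    qr.add_data(url)
    qr.make(fit=False)      # pin the version so both codes have equal size
    return qr.get_matrix()  # list of rows of booleans (True = dark)

def merge(url_a, url_b, cell=12):
    a, b = matrix(url_a), matrix(url_b)
    n = len(a)
    img = Image.new("1", (n * cell, n * cell), 1)  # all-white canvas
    draw = ImageDraw.Draw(img)
    for y in range(n):
        for x in range(n):
            x0, y0 = x * cell, y * cell
            x1, y1 = x0 + cell - 1, y0 + cell - 1
            if a[y][x] == b[y][x]:
                if a[y][x]:                       # both dark: solid cell
                    draw.rectangle([x0, y0, x1, y1], fill=0)
            elif a[y][x]:                         # only code A dark here
                draw.polygon([(x0, y0), (x1, y0), (x0, y1)], fill=0)
            else:                                 # only code B dark here
                draw.polygon([(x1, y0), (x1, y1), (x0, y1)], fill=0)
    return img

merge("https://example.com/a", "https://example.com/b").save("ambiguous.png")
```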

2. Tilde, My LLVM Alternative

Total comment count: 26

Summary

Yasser is developing Tilde (TB), an alternative to LLVM, aiming to address issues like slow compilation and excessive bloat by creating a new compiler backend library. His motivations stem from LLVM’s accumulated inefficiencies over the past 20 years. Key points include:

  • Performance: Early tests show TB’s preprocessor is twice as fast as Clang’s and can handle multiple translation units without threading issues.
  • Design Philosophy: TB focuses on building tools as libraries with clean, decoupled APIs that are easy to modify and extend, promoting reuse and innovation in compiler technology.
  • Collaborations: Yasser is working with others on integrating TB with projects like Odin and miniVM, and he’s aiming for his C compiler, Cuik, to be self-hosting by March 2024.
  • Development Status: TB is progressing with an optimizer that includes various optimization techniques like SSA construction and code motion. A demo is available for testing.

Yasser emphasizes that TB isn’t just another LLVM clone but a new approach to compiler backends, focusing on efficiency and modularity. He invites discussion and contributions to his project, highlighting the need for better, faster compiler tools in the programming community.

Top 1 Comment Summary

The article discusses the development of a new compiler backend called Tilde (or TB). The author expresses frustration with existing compilers, particularly their slow compilation times and large, complex codebases. This situation is likened to the origins of LLVM, which was also born out of dissatisfaction with GCC. The comment notes that initially LLVM compiled faster than GCC but produced less optimized code, and that over time both compilers improved under competitive pressure. The author also wonders whether compilers from Microsoft, Intel, or Borland are still used in industries like gaming, or whether the industry has largely shifted to LLVM or GCC.

Top 2 Comment Summary

The article discusses MLIR (Multi-Level Intermediate Representation), a project by Chris Lattner, who is also known for creating LLVM. Here are the key points:

  • MLIR as an Alternative: MLIR serves as an alternative framework where LLVM can be one of several backends. This means that while LLVM can be used for lowering Intermediate Representation (IR) to machine code, it’s not mandatory.

  • Benefits of MLIR: The architecture of MLIR allows for more extensive IR processing before reaching LLVM or any other backend. This pre-processing can reduce the workload on LLVM by handling optimizations and transformations at a higher level.

  • Challenges with LLVM: Chris Lattner has noted that LLVM is very efficient at converting IR to machine code. However, translating a language’s own mid-level IR (MIR) into LLVM IR often results in either:

    • Loss of Information: Losing important language-specific details in the translation.
    • Excessive Information: Generating an overwhelming amount of IR, much of which might be redundant or unnecessary, which LLVM then has to clean up and optimize.
  • Overall: MLIR aims to address these issues by letting IR be processed at multiple levels before it is handed off to a backend like LLVM, making compilation more efficient and less prone to losing critical information or overburdening the backend with redundant IR (a toy lowering sketch follows this list).
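To make the multi-level idea concrete, here is a toy sketch in Python. It is not real MLIR (whose ops live in typed dialects with regions and rewrite patterns); the op names are invented. A high-level matrix-multiply op is first lowered into a loop-nest form, and only then into the flat scalar ops a backend would consume, so each level can optimize at its natural granularity.

```python
# Toy multi-level lowering: hl.matmul -> loop.nest -> flat ll.fma ops.
# All names are invented for illustration; real MLIR works differently.
HIGH = ("hl.matmul", "C", "A", "B", 2, 2)   # C = A @ B for 2x2 matrices

def lower_to_loops(op):
    """High-level op -> explicit loop nest (a 'mid-level' dialect)."""
    _, c, a, b, n, m = op
    return ("loop.nest", ["i", "j", "k"], [n, m, m],
            ("mid.fma", f"{c}[i][j]", f"{a}[i][k]", f"{b}[k][j]"))

def substitute(expr, env):
    for name, val in env.items():
        expr = expr.replace(f"[{name}]", f"[{val}]")
    return expr

def lower_to_scalar(op):
    """Loop nest -> the flat scalar ops a backend would finally see."""
    _, ivs, bounds, (_, dst, x, y) = op
    out = []
    def walk(depth, env):
        if depth == len(ivs):
            out.append(("ll.fma", substitute(dst, env),
                        substitute(x, env), substitute(y, env)))
            return
        for v in range(bounds[depth]):
            walk(depth + 1, {**env, ivs[depth]: v})
    walk(0, {})
    return out

for instr in lower_to_scalar(lower_to_loops(HIGH)):
    print(instr)    # eight ll.fma ops for the unrolled 2x2 product
```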

3. The Most Mario Colors

Total comment count: 31

Summary

The article discusses an in-depth analysis of the color schemes used in the logos of Mario video games. The author examined over 40 Mario game logos to identify patterns in the color sequences used for Mario’s name:

  1. Logo Styles: There are two main logo styles for Mario games - one used for side-scrolling games dating back to the arcade era, and a newer polygonal style introduced with “Super Mario 64” for 3D games.

  2. Color Analysis: Each logo was analyzed to determine which colors were most commonly used for each letter in “Mario”. Despite variations, some colors like red for “M” and yellow for “I” were notably frequent.

  3. Patterns and Sequences: The author looked for recurring color sequences across games, finding that while there are common sequences, there isn’t a single “most Mario” color scheme that stands out universally. However, sequences like red-green-yellow-blue were quite common.

  4. Notable Observations:

    • Some games like “Super Mario 64” and the “Super Mario Galaxy” series share a common sequence.
    • Recent games seem to follow a pattern set by “Super Mario 3D Land”.
    • The analysis even extended to the logo for “The Super Mario Bros. Movie”, which fit the common color pattern.
  5. Conclusion: While the analysis didn’t serve a practical purpose, it was an interesting exploration into branding consistency, or the lack thereof, in the Mario franchise. The author humorously questions the utility of the exercise but provides a detailed look at a niche aspect of Mario’s visual identity (a toy version of the tally appears below).
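A toy version of the tally is below; the color sequences here are hypothetical stand-ins, since the article’s real dataset covers 40+ logos.

```python
# Count the most common color for each letter of "MARIO" across logos.
# The sequences are made up for illustration, not real logo data.
from collections import Counter

logos = [
    ["red", "blue", "green", "yellow", "blue"],    # hypothetical logo 1
    ["red", "green", "blue", "yellow", "green"],   # hypothetical logo 2
    ["blue", "red", "green", "yellow", "red"],     # hypothetical logo 3
]

for i, letter in enumerate("MARIO"):
    color, n = Counter(seq[i] for seq in logos).most_common(1)[0]
    print(f"{letter}: {color} ({n}/{len(logos)} logos)")
```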

Top 1 Comment Summary

The article discusses a flaw identified in a previous analysis of the color sequence in Mario games. The author initially intended to write a comment on this flaw but ended up creating a full article exploring the entropy of the Mario color sequence. The piece invites readers to delve into this analysis by providing a link to the detailed discussion on the entropy of Mario’s color patterns.
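For reference, such an entropy analysis rests on the standard Shannon formula, where p_i is the observed frequency of color i at a given letter position across the surveyed logos:

```latex
% Shannon entropy (in bits) of one letter position's color distribution;
% it peaks at \log_2 k when all k colors are equally likely.
H = -\sum_{i} p_i \log_2 p_i
```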

Top 2 Comment Summary

The article discusses a study on the frequency of color choices in video game titles, specifically focusing on how the placement of the word “Mario” in game titles influences color selection, particularly for the letter ‘M’. The commenter appreciates the study for being both trivial and insightful, suggesting further analysis on whether the first letter or significant letters in different title formats are colored red.

4. Weierstrass’s Monster

Total comment count: 16

Summary

The article discusses the historical development and challenges faced by calculus, focusing on the contributions of Karl Weierstrass. Here are the key points:

  • Background on Calculus: Invented in the 17th century, calculus was initially built on intuition rather than formal definitions, leading to foundational issues by the 19th century.

  • French vs. German Approach: French mathematicians continued using calculus pragmatically for applications in physics, while German mathematicians like Weierstrass sought to rigorously redefine and test its foundations.

  • Weierstrass’ Contribution: Karl Weierstrass, after a delayed start in mathematics, introduced a function in 1872 that is continuous everywhere but differentiable nowhere, challenging long-held beliefs about the nature of functions in calculus. The function is built by summing infinitely many cosine waves, producing an infinitely jagged curve (the formula is reproduced after this list).

  • Reactions to Weierstrass’ Function: His function was met with resistance and skepticism by prominent mathematicians like Henri Poincaré and Charles Hermite, who saw it as an affront to common sense and practical mathematics.

  • Implications: Weierstrass’ work helped to establish more rigorous mathematical foundations by showing that functions could behave in ways previously thought impossible, thus necessitating a reevaluation of basic calculus concepts like continuity and differentiability.
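For reference, the construction mentioned in the bullet on Weierstrass’ contribution is the following sum of damped cosines, with the conditions Weierstrass originally imposed:

```latex
% Weierstrass's function: continuous everywhere, differentiable nowhere.
% The amplitudes a^n shrink fast enough for uniform convergence (hence
% continuity), while the frequencies b^n grow fast enough that the
% difference quotients never settle down to a derivative.
W(x) = \sum_{n=0}^{\infty} a^n \cos\!\left(b^n \pi x\right),
\qquad 0 < a < 1, \quad b \text{ an odd integer}, \quad ab > 1 + \tfrac{3\pi}{2}.
```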

This article highlights how Weierstrass’ function, initially seen as a mathematical oddity, played a crucial role in the development of modern mathematical theory, pushing for a deeper understanding and formalization of calculus.

Top 1 Comment Summary

The article discusses the Dirichlet function, defined as f(x) = 1 if x is rational and f(x) = 0 if x is irrational. The function is continuous nowhere: it has no points of continuity over the real numbers, which makes it a significant counterexample in calculus. A modified version, g(x) = x if x is rational and 0 otherwise, is also mentioned; it is continuous at exactly one point, x = 0, another intriguing phenomenon.
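In symbols, the two functions are:

```latex
% Dirichlet function: continuous at no real point.
f(x) = \begin{cases} 1 & x \in \mathbb{Q} \\ 0 & x \notin \mathbb{Q} \end{cases}
\qquad
% Modified version: continuous only at x = 0.
g(x) = \begin{cases} x & x \in \mathbb{Q} \\ 0 & x \notin \mathbb{Q} \end{cases}
```

Continuity of g at 0 follows from the squeeze |g(x)| ≤ |x|, which forces both branches to 0; at any other point, rational and irrational inputs arbitrarily close together give values that stay a fixed distance apart.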

Top 2 Comment Summary

The article discusses a French book, “Les contre-exemples en mathématiques” (“Counterexamples in Mathematics”), recommended for French-speaking students heading into undergraduate or engineering studies. The book collects mathematical counterexamples; in the chapters on continuity, many examples rely on Weierstrass-style functions, which the commenter finds somewhat repetitive but useful. The book is an excellent resource for understanding why certain implications hold in one direction but not in the other.

5. Coping with dumb LLMs using classic ML

Total comment count: 15

Summary

The article discusses an approach to evaluate and improve search relevance in e-commerce using a local Large Language Model (LLM) instead of relying on expensive services like OpenAI. Here’s a summary:

  1. Objective: The author aims to use a local LLM to assess which products are more relevant to a search query, comparing these assessments with human judgments from an e-commerce dataset (WANDS from Wayfair).

  2. Methodology:

    • The LLM makes decisions based on individual product attributes like description, title, etc., and these decisions are then combined to make a more informed choice.
    • The process involves creating prompts for different product attributes and checking the consistency of the LLM’s decisions by reversing product positions in comparisons (LHS/RHS).
  3. Experiments:

    • Various permutations are tested, including double-checking decisions and allowing the LLM to decide “Neither” when it can’t choose. These experiments show trade-offs between precision and recall.
    • An “uber prompt” combining all attributes was tested, along with individual attribute decisions to form an ensemble decision-making approach.
  4. Machine Learning Application:

    • The author recognizes that this process can be framed as a machine learning problem where:
      • Features are the individual LLM predictions for each attribute.
      • The label to predict is the human preference.
    • A decision tree classifier from Scikit-learn was used to predict human preferences from the LLM assessments, with thresholds set to balance precision and recall (a sketch appears after this summary).
  5. Results:

    • The LLM’s accuracy in predicting relevance improved with certain configurations, reaching up to 90.76% precision in some tests.
    • The process involves collecting thousands of LLM evaluations, which were used for both testing and training the classifier.
  6. Conclusion:

    • The approach demonstrates potential for using local LLMs for cost-effective and rapid evaluation of search relevance, with the flexibility to tune the model’s decision-making process to match human preferences more closely.

This method could serve as a preliminary step in search relevance tuning, reducing the need for human evaluators and providing a scalable, less resource-intensive alternative for ongoing search quality improvements.
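A minimal sketch of the classic-ML step is below, assuming the per-attribute LLM verdicts have already been collected. The feature encoding, column meanings, and thresholds are illustrative, not the author’s exact pipeline; random data stands in for the real evaluations.

```python
# Each row holds the LLM's per-attribute verdicts for one product pair
# (encoded -1 = prefers RHS, 0 = neither, 1 = prefers LHS); the label is
# the human judgment. A shallow decision tree learns how to combine them.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.integers(-1, 2, size=(5000, 4))          # [title, description, ...]
y = (X.sum(axis=1) + rng.normal(0, 1, 5000) > 0).astype(int)  # fake labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=3).fit(X_tr, y_tr)

# Trade recall for precision: only act on confident predictions.
proba = clf.predict_proba(X_te)[:, 1]
keep = (proba > 0.8) | (proba < 0.2)
preds = (proba > 0.5).astype(int)
print(f"kept {keep.mean():.0%} of pairs, "
      f"accuracy on kept pairs {(preds[keep] == y_te[keep]).mean():.2%}")
```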

Top 1 Comment Summary

The article discusses the pitfalls of hastily adopting new, complex machine learning models without first considering simpler, well-established baseline systems. The author shares a personal experience from participating in machine learning competitions where a straightforward approach using Named Entity Recognition (NER) with Conditional Random Fields (CRF) outperformed many sophisticated deep learning solutions. This experience underscores the importance of not underestimating the effectiveness of traditional methods and the complexity involved in deploying newer models like deep learning and large language models (LLMs). The author suggests that these advanced models are often applied without proper comparison to robust baseline models, potentially leading to suboptimal results.

Top 2 Comment Summary

The article discusses the author’s experience with using various Large Language Models (LLMs) like Sonnet 3.5, GPT 4o, Llama 3.2, and Qwen2 VL for Optical Character Recognition (OCR). While these models were effective at text extraction, they struggled with accurately identifying bounding boxes for text, often providing incorrect coordinates. The author tried adjusting for potential issues like image resizing by using percentage-based coordinates, but this did not solve the problem.

Frustrated, the author reverted to using older, specialized OCR models like PP-OCR, which, although slightly less accurate in text extraction compared to the LLMs, were highly reliable for bounding box detection and much more efficient in terms of resource usage. The author questions the capability of current LLMs for tasks requiring precise spatial recognition, wondering how companies like Anthropic and OpenAI manage to develop applications needing such accuracy when their models fail in this aspect.

6. UI is hell: four-function calculators

Total comment count: 25

Summary

The article discusses the complexities involved in designing a simple calculator, highlighting the challenges in implementing seemingly straightforward arithmetic operations and user interface decisions. Here are the key points:

  1. Basic Design: A calculator typically includes digit keys, arithmetic operators, an equals sign, and a clear button. Implementing this involves managing input, an accumulator, and operator selection.

  2. User Interaction Challenges:

    • Post-Equals Input: If a user presses a digit immediately after an equals sign, the calculator should clear the previous result, which requires an additional flag for handling input reset.
    • Operator Precedence: A four-function calculator evaluates strictly left to right, so pressing an operator must act as an implicit equals sign that applies the pending operation; otherwise operations don’t execute as expected and results come out wrong.
    • Operator Correction: When a user enters consecutive operators, the system must interpret whether it’s a correction or a different operation, which adds complexity in managing operator input.
  3. Ambiguities in Operations:

    • Negative Numbers: There’s ambiguity in whether an operation like “2 × -3” should be interpreted as multiplication by a negative number or subtraction after multiplication. Different calculators handle this differently, often with special cases or dedicated sign buttons.
    • Unary Operations: Handling unary minus signs introduces further confusion, especially in how operations affect the entire expression or just the immediate operand.
  4. Additional Features and Their Implications:

    • Constant Feature: Some calculators have a “K-constant” feature allowing for repetitive operations with one constant operand, useful for tasks like calculating interest or surcharges. This feature complicates the simple model by requiring the retention of the last operation and one operand.
  5. Conclusion: Designing a calculator, even a basic one, involves numerous edge cases and special scenarios that must be accounted for to ensure usability and correct functionality, challenging the initial assumption that such a task could be completed quickly.

The article underscores that what appears to be a simple device has layers of complexity beneath its surface, making calculator design an intricate part of software engineering; a minimal state-machine sketch follows.
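Here is a minimal sketch, in Python, of the state machine the article describes: an accumulator, a pending operator, digit entry, and a flag that starts a fresh calculation when a digit follows “=”. It evaluates strictly left to right, with the consecutive-operator case treated as a correction; the K-constant feature and sign handling are left out.

```python
# Four-function calculator core: accumulator + pending operator + flags.
class Calc:
    def __init__(self):
        self.acc = 0.0            # accumulator: left operand / last result
        self.entry = ""           # digits currently being typed
        self.op = None            # pending operator, applied lazily
        self.after_equals = False # set so a new digit clears the result

    def digit(self, d):
        if self.after_equals:     # typing after "=" starts a fresh calc
            self.acc, self.op, self.after_equals = 0.0, None, False
        self.entry += d

    def operator(self, op):
        if not self.entry and self.op is not None:
            self.op = op          # consecutive operators: a correction
            return
        self._apply()             # operator press acts as implicit "="
        self.op = op
        self.after_equals = False

    def equals(self):
        self._apply()
        self.after_equals = True

    def _apply(self):
        rhs = float(self.entry) if self.entry else self.acc
        fns = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
               "x": lambda a, b: a * b, "/": lambda a, b: a / b,
               None: lambda a, b: b}
        self.acc = fns[self.op](self.acc, rhs)
        self.entry, self.op = "", None

c = Calc()
for key in ["2", "+", "3", "x", "4", "="]:
    if key.isdigit():
        c.digit(key)
    elif key == "=":
        c.equals()
    else:
        c.operator(key)
print(c.acc)  # 20.0: left-to-right, unlike a PEMDAS calculator's 14
```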

Top 1 Comment Summary

The article discusses an issue with the iOS calculator’s functionality before its recent redesign. Previously, the calculator did not display the full expression being calculated, leading to confusion because:

  • Appearance: It looked like a basic four-function calculator, which typically performs operations in the order entered (left to right).
  • Actual Behavior: It was actually evaluating expressions according to standard mathematical precedence rules (PEMDAS/BODMAS), but this wasn’t visually indicated; entering 2 + 3 × 4, for example, yields 14 rather than the 20 a strict left-to-right calculator produces.

The author admits to preferring this more complex behavior but found it non-intuitive without visual cues, resulting in incorrect calculations. The redesign has since addressed this by showing the full expression, making the calculator’s operations clearer. The author wonders if this was a common issue or just their own misunderstanding.

Top 2 Comment Summary

The author reflects on having forgotten how to use basic four-function calculators due to a switch to Reverse Polish Notation (RPN) calculators over the last decade. They mention using Emacs Calc on computers and RealCalc on Android devices as their preferred RPN tools. Additionally, for simpler tasks like scaling recipes, the author uses a slide rule, advocating for its utility in understanding proportions.

7. Show HN: Cs16.css – CSS library based on Counter Strike 1.6 UI

Total comment count: 27

Summary

The article discusses a CSS library named cs16.css which replicates the user-interface style of Counter-Strike 1.6. To use the library, one adds a single stylesheet line to the <head> section of an HTML document. The library includes styles for elements such as dark and light mode options and tabbed content.

Top 1 Comment Summary

The article lists several CSS frameworks designed to emulate the user interface styles of various nostalgic computing and gaming systems:

  • Windows 95 - A CSS framework to mimic the look of Windows 95.
  • Windows XP - CSS to replicate the appearance of Windows XP.
  • Counter-Strike 1.6 - A style sheet for the look of the classic game Counter-Strike 1.6.
  • Edward Tufte - CSS inspired by the visual style of Edward Tufte’s work.
  • Windows 98 - CSS for Windows 98 aesthetic.
  • Windows 7 - CSS framework for the Windows 7 interface.
  • PlayStation One - CSS to give elements the look of the original PlayStation UI.
  • Nintendo Entertainment System (NES) - CSS for NES game-like interfaces.
  • Apple System - CSS to emulate Apple’s classic system UI.
  • The Sims - CSS framework designed to mimic the UI of The Sims game.

The author expresses enthusiasm about expanding their collection of these nostalgic CSS frameworks.

Top 2 Comment Summary

The article discusses user interface (UI) design in video games, comparing the immersive, industrial-style UI of “Half-Life 2” with the more stylized, less realistic UIs found in other games. The author appreciates how the UI in “Half-Life 2” enhances the gaming experience by feeling real and easy to navigate, despite appearing dated. In contrast, they find overly decorative UIs in other games less user-friendly and less immersive, describing them as a chore to use.

8. The State of Vim

Total comment count: 15

Summary

The article discusses the impact of Bram Moolenaar’s death in 2023 on the Vim project, a popular text editor, and the subsequent steps taken by the community to ensure its continuity. Christian Brabandt, who has been involved with Vim since 2006, took on a more significant role after Moolenaar’s passing. Here are the key points:

  • Leadership Transition: Following Moolenaar’s death, there was a need to reorganize the project’s management. Brabandt, along with other contributors like Ken Takata, stepped up to maintain Vim, dealing with issues like GitHub organization settings and access to Moolenaar’s account for necessary changes.

  • New Maintainers: After the retirement of long-term contributor Charles Campbell, the team expanded to include new maintainers like Yegappan Lakshmanan, Dominique Pellé, Doug Kearns, and others.

  • Technical and Community Management: Brabandt highlighted that maintaining Vim involves more than just coding; it includes managing the website, FTP server, security disclosures, and community interactions on platforms like Reddit and Stack Exchange.

  • Website and Infrastructure: The Vim website, which was outdated and running on PHP 5, was updated to PHP 8 with help from Mark Schöchlin. Efforts are underway to redesign the site to be more user-friendly while maintaining consistency for long-time users. The FTP server was retired, and email forwarding was updated.

  • Charity and Funding: Vim remains charityware, supporting ICCF Holland, the charity founded by Moolenaar. Donations increased post-Moolenaar’s death, raising about €90,000 in 2023. All donations are directed to ICCF, with no changes planned in this arrangement despite the lack of sponsorship for maintainers.

  • Future of Vim: Brabandt and the new team aim to keep Vim evolving while respecting its roots and community, ensuring that it remains a relevant and supported tool for users.

Top 1 Comment Summary

The article discusses how VIM (Vi IMproved), a text editor, has managed a smooth transition under new leadership, even though the timing of the leadership change was not ideal. It suggests that this successful transition could serve as an example for other projects led by a Benevolent Dictator for Life (BDFL), encouraging them to plan for leadership succession early on. The article includes a link to a Wikipedia page about the BDFL model of project governance.

Top 2 Comment Summary

The article discusses strategies to increase the adoption of Vim9 script, Vim’s new scripting language:

  1. Highlighting its Advantages: It emphasizes that Vim9 script is significantly better than the older Vimscript, making it more user-friendly.

  2. Comparison with Lua: It argues that Vim9 script is better suited for writing text editor plugins compared to Lua, despite Lua being a more general-purpose scripting language.

The article acknowledges that even with these advantages, there might still be reluctance among users and developers to learn a new language when they are already familiar with Lua. However, it stresses the importance of spreading awareness about these benefits to encourage adoption.

9. I wrote my own “proper” programming language (2020)

Total comment count: 8

Summary

The article discusses the process of building a compiler for Bolt, a Java-style concurrent object-oriented programming language, as part of a series of educational posts. Here are the key points:

  1. Introduction to Bolt: The author, who had no prior experience in building compilers or using OCaml/C++, embarked on creating Bolt as part of their academic project. The series aims to go beyond simple toy languages to explore features of a more complex, real-world language.

  2. Why Build a Programming Language?: The motivation for designing a language includes:

    • To understand and manipulate the syntax and semantics of programming languages deeply.
    • To gain insights into how different languages handle common programming paradigms.
    • To develop better mental models of programming concepts, making it easier to learn new languages.
  3. Compiler Overview: The article uses an analogy of a telegraph operator to explain compiler stages:

    • Lexing: Breaking down the input into tokens or words.
    • Parsing: Understanding the structure or syntax of these tokens.
    • Type-Checking: Ensuring the program makes sense semantically by verifying that types are used consistently.
    • Translation (Compilation): Converting the parsed and checked code into machine code or another intermediate form.
  4. Additional Compiler Functions: Beyond the basic stages, compilers often include:

    • Desugaring or Lowering: Simplifying complex language constructs into simpler, more manageable forms before final compilation.
  5. Learning Benefits: Creating a programming language provides a unique perspective on programming, enhancing one’s ability to quickly adapt to new languages by understanding their underlying principles and constructs.

The author plans to delve deeper into these topics in subsequent posts, encouraging readers to follow along to gain a comprehensive understanding of compiler design and language creation; a toy sketch of the four stages appears below.
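To ground the four stages, here is a toy end-to-end pipeline for a language of integer additions. Every stage is drastically simplified (Bolt itself is written in OCaml and C++), and the stack-machine output is an invented target.

```python
# Toy pipeline: lex -> parse -> type-check -> translate, for "1 + 2 + 3".
import re

def lex(src):                            # 1. lexing: text -> tokens
    return re.findall(r"\d+|\+", src)

def parse(tokens):                       # 2. parsing: tokens -> AST
    node = ("int", int(tokens[0]))
    for i in range(1, len(tokens), 2):
        assert tokens[i] == "+", "expected '+'"
        node = ("add", node, ("int", int(tokens[i + 1])))
    return node

def typecheck(ast):                      # 3. type-checking: ints all the way
    if ast[0] == "int":
        return "int"
    assert typecheck(ast[1]) == "int" and typecheck(ast[2]) == "int"
    return "int"

def emit(ast, out):                      # 4. translation: AST -> stack code
    if ast[0] == "int":
        out.append(f"PUSH {ast[1]}")
    else:
        emit(ast[1], out); emit(ast[2], out); out.append("ADD")
    return out

ast = parse(lex("1 + 2 + 3"))
typecheck(ast)
print("\n".join(emit(ast, [])))  # PUSH 1 / PUSH 2 / ADD / PUSH 3 / ADD
```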

Top 1 Comment Summary

The article discusses a method for developing compilers that emphasizes simplicity and self-hosting over reliance on complex libraries like LLVM. Here are the key points:

  1. Compiler Design: Instead of relying on heavyweight frontend and backend libraries, which slow compilation, the author suggests writing parsers by hand and targeting a simpler or existing language (like JavaScript or C) for code generation. This approach avoids the overhead of tools like LLVM.

  2. Parsing: The author explains that, with an understanding of parsing principles, one can write efficient parsers in any programming language by building simple character classification tables (sketched after this list).

  3. Development Process: Start with a minimal compiler capable of compiling a “Hello World” program, then gradually add features until the compiler can compile itself (self-hosting). This usually involves writing fewer than 10,000 lines of code.

  4. Self-Hosting and Iteration: Once self-hosting is achieved, the compiler can be rewritten in the language it compiles, allowing for refinement based on insights gained during development. At this stage, considering more complex targets like LLVM or custom assembly might be justified.

  5. Philosophy: The approach keeps the compiler small and manageable, avoiding unnecessary features like concurrency at the start since small programs don’t require them. This method allows for a deep understanding of the language and compiler, enabling the creation of specialized features like data race protection in future iterations.

  6. Evolution: After mastering basic compiler construction, one can design a new language with desired advanced features, starting from scratch with those features integrated from the beginning.

The article advocates for a bootstrap approach in compiler construction, focusing on simplicity, understanding, and gradual enhancement.
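The character-classification idea from point 2 can be sketched in a few lines: a 256-entry table maps each byte to a class, and the lexer walks runs of the same class. The classes and token rules here are illustrative.

```python
# Table-driven lexer sketch: classify each byte once, then take greedy
# runs of the same class as tokens. Whitespace tokens are dropped.
DIGIT, ALPHA, SPACE, PUNCT = range(4)

CLASS = [PUNCT] * 256
for c in b"0123456789":
    CLASS[c] = DIGIT
for c in b"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ_":
    CLASS[c] = ALPHA
for c in b" \t\r\n":
    CLASS[c] = SPACE

def lex(src):
    i, tokens = 0, []
    while i < len(src):
        kind, j = CLASS[src[i]], i + 1
        if kind != PUNCT:                 # greedy run of one class
            while j < len(src) and CLASS[src[j]] == kind:
                j += 1
        if kind != SPACE:                 # drop whitespace tokens
            tokens.append((kind, src[i:j].decode()))
        i = j
    return tokens

print(lex(b"count = count + 42"))
# [(1, 'count'), (3, '='), (1, 'count'), (3, '+'), (0, '42')]
```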

Top 2 Comment Summary

The article expresses enthusiasm for the trend of developing new programming languages, highlighting the vast potential for innovation in this area. However, the author notes a desire for more universal tooling that supports multiple languages, such as debuggers, sophisticated diff tools that understand parse trees, and advanced autocompletion features like Intellisense, to facilitate the development and adoption of these new languages.

10. Disabling Zen 5’s Op Cache and Exploring Its Clustered Decoder

Total comment count: 4

Summary

The article discusses the architectural details of AMD’s Zen 5 CPU, particularly focusing on its frontend setup which features a pair of fetch and decode clusters, similar to the older Steamroller architecture but with significant differences. Here are the key points:

  1. Frontend Setup: Zen 5 has a dual fetch and decode cluster setup, allowing for up to eight instructions per cycle with two threads or four with one. This is somewhat reminiscent of Steamroller, but Zen 5 primarily uses a 6K entry op cache to feed instructions, reducing reliance on the decoders which are only activated on op cache misses.

  2. Testing Without Op Cache: The author tested Zen 5 with its op cache disabled to evaluate the decoder’s performance, using the AMD Ryzen 9 9900X. This setup was compared to older AMD architectures like Steamroller and Excavator, highlighting differences in performance and instruction handling.

  3. Performance Comparison: With the op cache off, Zen 5’s performance drops significantly, by about 20.3% in integer and 16.8% in floating-point tasks on SPEC CPU2017 benchmarks, a heavier penalty compared to its predecessor, Zen 4.

  4. Instruction Fetching and Decoding: Zen 5 can fetch 16 bytes per cycle per thread, but only reaches its full fetch bandwidth with both threads active, thanks to the dual fetch paths. This setup also provides increased memory-level parallelism when both threads are in use.

  5. Comparison with Steamroller and Excavator: While sharing some architectural similarities, Zen 5 differs by having an op cache and separate fetch paths for each decode cluster, which significantly improves performance when fetching from higher levels of memory.

  6. Insights and Observations: The article notes that while older architectures like Excavator had some advantages in specific scenarios, Zen 5 generally outperforms due to its modern design elements like the op cache and improved memory handling.

Overall, the article provides an in-depth look at how Zen 5’s frontend operates, its capabilities, and how it compares to AMD’s past CPU architectures in terms of instruction handling and performance when stripped of its usual optimizations like the op cache.

Top 1 Comment Summary

The article discusses the work of Emery Berger on performance analysis in software, specifically focusing on a method called Causal Profiling:

  • Causal Profiling: This technique, developed by Emery Berger, aims to identify which parts of the code have the most significant impact on the overall performance by introducing controlled slowdowns and observing the effects. This approach is detailed in his paper “Coz: Finding Code that Counts with Causal Profiling.”

  • Resources:

    • The paper can be found on arXiv at the provided link.
    • A related talk by Emery Berger titled “Performance (Really) Matters” is available on YouTube, offering further insights into why performance optimization is crucial and how causal profiling can be applied.

This method and Berger’s research would particularly interest individuals involved in software optimization and performance tuning.

Top 2 Comment Summary

The article discusses presentations from AMD, Intel, and Qualcomm at Hot Chips 2024, where all showcased their high-performance cores capable of handling up to eight micro-ops per cycle. Among these, AMD’s Zen 5 core was unique in that it couldn’t allocate eight decode slots to a single thread. The article also mentions that when including Apple and ARM, Zen 5 remains the only core with this limitation. It speculates on the future of AMD’s architecture, questioning if Zen 6 will differ from its predecessor, noting that Intel is rapidly iterating on their designs while AMD’s Zen 6 is still some time away.