• skarn@discuss.tchncs.de
    17 hours ago

    But I mean why? Used in this way, AI systems are just another static analysis tool.

    Sure, a computationally inefficient one, but if you can get the signal/noise ratio high enough, anything that helps you find bugs seems fair game to me.
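    To make the signal/noise argument concrete, here's a back-of-the-envelope sketch (all numbers and names are hypothetical, invented for this comment): a noisy bug finder still pays off as long as the expected value of the real bugs it surfaces outweighs the reviewer time burned triaging false positives.

```python
# Back-of-the-envelope triage economics for a noisy bug finder.
# All numbers below are hypothetical, for illustration only.

def net_value(reports, precision, triage_cost, bug_value):
    """Expected payoff of reviewing `reports` findings.

    precision   -- fraction of reports that are real bugs (the signal/noise ratio)
    triage_cost -- reviewer cost to vet one report (same unit as bug_value)
    bug_value   -- value of one confirmed real bug
    """
    expected_real_bugs = reports * precision
    return expected_real_bugs * bug_value - reports * triage_cost

# A tool that is right only 1 time in 10 can still be worth running,
# if a confirmed security bug is worth far more than a triage pass...
mostly_noise_but_worth_it = net_value(reports=100, precision=0.1,
                                      triage_cost=50, bug_value=5000)

# ...while at 1-in-1000 precision the same tool is a net loss.
pure_noise = net_value(reports=100, precision=0.001,
                       triage_cost=50, bug_value=5000)
```

    The point is only that "inefficient" and "useless" are different claims: the break-even precision depends on triage cost and bug value, not on the tool's elegance.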

    One has to review its output and treat any fix offered by the slopmachine with a lot of care, of course.

    And Anthropic is a bad company, but we are talking about detecting security vulnerabilities in Firefox by wasting Anthropic's money. That seems like a win-win.

    The only downside (and I admit it’s big) is that Anthropic gets some publicity out of this.

    • Dæmon S.@calckey.world
      16 hours ago

      @skarn@discuss.tchncs.de @Solumbran@lemmy.world @linux@lemmy.ml

      Have you considered the possibility that, by “finding a bug” and possibly “suggesting” a “patch”, the LLM could be smuggling in another bug unbeknownst to the vibe coder(s), and/or smuggling in technical debt?

      I say this as someone who's been coding since I was 8 (I'm 30 now), someone who doesn't share the tribalistic anti-AI sentiment (I even use LLMs sometimes, particularly non-Western ones such as Deepseek and Qwen) but who understands LLMs well enough to know that the current, state-of-the-art stochastic parrots shouldn't be trusted with the source code of any even slightly serious project, especially a full browser like Firefox.

      Chances are devs are going to blindly trust and obediently stage-and-commit whatever the parroting machine spits out, and this can end up really messy. Given Mozilla's ongoing pivot to AI, I doubt they're worried about the consequences of vibe coding, though.
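      A toy illustration of the smuggled-bug worry (everything here is invented for this comment, nothing is from Firefox): a plausible LLM "fix" that makes the reported crash go away while silently changing behavior, exactly the kind of diff a hurried reviewer stages and commits.

```python
# Hypothetical before/after of an LLM-suggested "fix" (illustration only).

def parse_port_original(s):
    # Original code: the "bug" is that non-numeric input raises ValueError
    # and crashes the caller.
    return int(s)

def parse_port_llm_patched(s):
    # A plausible LLM patch: the crash is gone, but invalid input now
    # silently becomes port 0 -- a smuggled behavior change. The diff
    # looks like a defensive fix, so it's easy to wave through review.
    try:
        return int(s)
    except ValueError:
        return 0

# The patched version no longer rejects garbage: malformed input quietly
# maps to a real port value instead of surfacing an error to the caller.
```

      The crash report is "fixed", the test that reproduced it passes, and the new failure mode only shows up much later, somewhere far from this function. That's the technical debt being smuggled in.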