• wjs018@piefed.social
      22 days ago

      The theory from the lead maintainer (he is an actual software developer; I just dabble) is that it might be a type of reinforcement learning:

      • Get your LLM to create what it thinks are valid bug reports/issues
      • Monitor the outcome of those issues (closed immediately, discussion, eventual pull request)
      • Use those outcomes to score how “good” or “bad” each generated issue was
      • Feed that score back into the model so it learns to create more “good” issues

      If this is what’s happening, then it’s essentially offloading your LLM’s reinforcement-learning scoring onto open source maintainers.
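
      The loop described above could be sketched roughly like this. This is purely speculative pseudocode of the theory, not anyone’s actual pipeline; every name here (`generate_issue`, `watch_outcome`, the reward values) is a made-up placeholder:

```python
# Hypothetical sketch of the suspected feedback loop.
# All function names and reward values are invented for illustration.

OUTCOME_REWARDS = {
    "closed_immediately": -1.0,  # maintainer rejected it as noise
    "discussion": 0.5,           # humans engaged, so it looked plausible
    "pull_request": 1.0,         # led to a real fix: strongest signal
}


def score_issue(outcome: str) -> float:
    """Map a maintainer's reaction to a reward signal for the generator."""
    return OUTCOME_REWARDS.get(outcome, 0.0)


def collect_training_signal(issues_with_outcomes):
    """Pair each generated issue with its reward, ready to feed back
    into the model's reinforcement-learning update step."""
    return [(issue, score_issue(outcome))
            for issue, outcome in issues_with_outcomes]


signal = collect_training_signal([
    ("fake buffer overflow report", "closed_immediately"),
    ("plausible race condition report", "pull_request"),
])
```

      The point of the sketch is just the shape of the incentive: the only “labeling” work in the loop is done for free by the maintainers triaging the issues.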