jethronethro 3 hours ago

This is why you test code or a script before running it for real. Live and learn, I guess ...

qwertox 15 hours ago

Rule #1: Always put deletions behind a flag which is disabled for the first couple of test runs.
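A minimal sketch of that rule, assuming a generic upload-then-delete script (the `sync_and_cleanup` helper and its names are hypothetical, not from the incident):

```python
import os

def sync_and_cleanup(paths, upload, delete_sources=False):
    """Upload each file, then delete the local copy only when explicitly enabled.

    delete_sources defaults to False, so the first runs are dry runs.
    """
    for path in paths:
        upload(path)
        if delete_sources:
            os.remove(path)
            print(f"Deleted local file: {path}")
        else:
            print(f"[dry run] would delete: {path}")
```

Only after a few dry runs confirm the uploads land where you expect would you flip `delete_sources=True` (or wire it to a `--delete` flag).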

  • turtleyacht 15 hours ago

    It was truncating filenames, so /pics/1003-46.png overwrote /pics/1003-45.png because both were renamed /pics/1003-.png, or something like that.

    • qwertox 15 hours ago

      Truncating file names for the target. Then it proceeded to delete the source file. "Successfully deleted local file: ..."

      I mean, look at the printout. It shows that it created the remote file with the truncated filename, then deleted the local file with the correct filename.

      • turtleyacht 15 hours ago

        Oh, I see. Having a flag to skip deletion during test runs is a good rule then.
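The collision described in this sub-thread can be reproduced in a couple of lines. This is a hypothetical reconstruction of the kind of suffix-stripping bug, not the actual script:

```python
import re

def truncated_name(filename):
    # Hypothetical bug: a regex meant to clean up the name instead strips
    # the distinguishing numeric suffix before the extension.
    return re.sub(r"\d+(\.png)$", r"\1", filename)

sources = ["1003-45.png", "1003-46.png"]
targets = {truncated_name(n) for n in sources}
# Both sources collapse to the single target "1003-.png", so the second
# upload silently overwrites the first -- and then both locals get deleted.
```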

rsynnott 7 hours ago

In which Roko's Basilisk fires a warning shot.

victorbjorklund 14 hours ago

Who runs such an AI generated script without checking the code first?

  • qwertox 13 hours ago

    To be fair, the code Gemini outputs in AI Studio is so extremely verbose that it is almost impossible to read through it.

    It turns 10 lines of code which is perfectly fine to reason about into 100 lines of unreadable code full of comments and exception handling.

    • rsynnott 7 hours ago

      > To be fair, the code Gemini outputs in AI Studio is so extremely verbose that it is almost impossible to read through it.

      In which case, it should simply be considered unusable. Like, the sensible response to "tool is so inadequate that there is no reasonable way to make sure its output is safe" is to _not use that tool_.

    • weatherlite 10 hours ago

      Right, so let's just always run the code as-is?

      • qwertox 3 hours ago

        No. Not at all. I've settled on discussing my code with Gemini; that way it works very well. I explicitly say "Comment on my code and discuss it" or "Let's discuss code for a script doing this and that. Generate an outline and let's see where this leads. Don't put comments in the code, nor exception handling, we're just discussing it".

        Or you create elaborate System Instructions, since it adheres to them pretty well.

        But out-of-the-box, Gemini's coding abilities are unusable due to the verbosity.

        I've even gone so far as to tell it that it must understand that I am just a human with limited bandwidth in my brain, so it should write code that is easy to reason about, and that this is more important than handling every possible exception or adding multiline comments.

rvz 15 hours ago

Recently there was a story about an updater causing an $8,000 bill because basic automated tests that would have caught the issue were missing. [0]

The big lesson here is that you should actually test the code you write, and also write automated tests to verify that any LLM-generated code does what it is supposed to do.

It is also useless to ask one AI to check for mistakes made by another LLM. As the post shows, both of them failed to catch the issue.

This is why I don't take the hype around 'vibe-coding' seriously: not only is it not software engineering, it promotes low quality and carelessness, skipping basic testing and any check that the software / script works as expected.

Turning $70 problems found in development into $700,000+ costs in production.

There are no more excuses for not adding tests.
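One cheap form such a test can take, before any deletion is ever enabled: assert the invariant that no two source files map to the same destination name. The helper below is a hypothetical sketch, not code from the linked incident:

```python
def check_no_collisions(sources, rename):
    """Fail fast if a rename function would map two sources to one target."""
    targets = [rename(s) for s in sources]
    dupes = {t for t in targets if targets.count(t) > 1}
    if dupes:
        raise ValueError(f"rename collision, refusing to proceed: {sorted(dupes)}")

# A buggy rename that drops the numeric suffix is caught before anything runs:
buggy = lambda name: name.split("-")[0] + "-.png"
try:
    check_no_collisions(["1003-45.png", "1003-46.png"], buggy)
except ValueError as err:
    print(err)  # rename collision, refusing to proceed: ['1003-.png']
```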

[0] https://news.ycombinator.com/item?id=43829006