Author Topic: Programmers hate this one trick: ChatGPT

Have you guys successfully gotten ChatGPT to spit out working scripts? I got it to produce a Tampermonkey script that actually functions. It took a few tries, and I can tell it's not great, since I had to walk it through fixing each issue one step at a time.

The script just replaces broken images on a map with a new image.

Code: [Select]
// ==UserScript==
// @name         HearthWorld Thingwall Marker Fix (Zoom Safe)
// @namespace    https://map.hearthworld.com/
// @version      1.1
// @description  Replace the broken thingwall marker icon on the HearthWorld map (zoom-safe)
// @match        https://map.hearthworld.com/*
// @grant        none
// @run-at       document-idle
// ==/UserScript==

(function () {
    'use strict';

    const BROKEN_ICON_PATH = '/icons/thingwall.png';
    const REPLACEMENT_ICON = 'https://files.catbox.moe/wqb370.png';

    function fixThingwallMarker(img) {
        if (!img.classList.contains('leaflet-marker-icon')) return;

        const src = img.getAttribute('src');
        if (!src || !src.includes(BROKEN_ICON_PATH)) return;

        // Prevent infinite loops
        if (img.dataset.thingwallFixed === 'true') return;

        img.dataset.thingwallFixed = 'true';
        img.src = REPLACEMENT_ICON;
    }

    function watchMarker(img) {
        if (img.dataset.thingwallWatched) return;
        img.dataset.thingwallWatched = 'true';

        // Fix immediately (important for zoom rebuilds)
        fixThingwallMarker(img);

        // Fix if image fails to load
        img.addEventListener('error', () => {
            img.dataset.thingwallFixed = 'false';
            fixThingwallMarker(img);
        });

        // Fix if Leaflet resets the src during zoom
        const attrObserver = new MutationObserver(mutations => {
            for (const m of mutations) {
                if (m.attributeName === 'src') {
                    img.dataset.thingwallFixed = 'false';
                    fixThingwallMarker(img);
                }
            }
        });

        attrObserver.observe(img, {
            attributes: true,
            attributeFilter: ['src']
        });
    }

    // Handle existing markers
    document
        .querySelectorAll('img.leaflet-marker-icon')
        .forEach(watchMarker);

    // Watch for new markers added during pan/zoom
    const domObserver = new MutationObserver(mutations => {
        for (const mutation of mutations) {
            for (const node of mutation.addedNodes) {
                if (node.nodeType !== 1) continue;

                if (node.tagName === 'IMG') {
                    watchMarker(node);
                } else {
                    node
                        .querySelectorAll?.('img.leaflet-marker-icon')
                        .forEach(watchMarker);
                }
            }
        }
    });

    domObserver.observe(document.body, {
        childList: true,
        subtree: true
    });
})();
Anybody know anything about programming? Does this happen to be an absolutely terrible way to do this? Have you guys gotten ChatGPT to write a simple program for you?
« Last Edit: December 24, 2025, 02:59:38 AM by Soukuw »

plenty of times. ive created custom GPTs that build python and JSON scripts to help automate data entry at my company. i would never trust it to make a fully fleshed-out program but its pretty damn good at organizing numbers and categorizing information

whether it's the most optimized thing or not is a shot in the dark, but forget it, if it works it works

Quote
plenty of times. ive created custom GPTs that build python and JSON scripts to help automate data entry at my company. i would never trust it to make a fully fleshed-out program but its pretty damn good at organizing numbers and categorizing information

whether it's the most optimized thing or not is a shot in the dark, but forget it, if it works it works
Yeah, it's a bit funky. Maybe if it was the paid version it wouldn't be, but it frequently seems to forget things in between prompts

the most ive used it for is to generate a short 5-10 line bash script, since poring through documentation to find the exact behavior i needed was annoying. i then proceeded to interrogate the ai about every part of every line to make sure i understood completely what was going on.

past that, no. heard too many stories about people being overly reliant on it so i dont want to give myself the chance to crutch on it too.

I use it as a fuzzy search honestly

there are a lot of things i want to look up that i can't accurately describe, or syntax that i want to understand or put a name to. it helps me with that so i know what to actually look up.

also its very good at creating small throwaway scripts that i can quickly verify the function of so i can do boring stuff faster. like i don't need to know the super detailed specifics of bash or autohotkey to automate something quickly anymore

i make sure to read/sandbox the scripts before executing though so it doesn't go haywire

it's also really good at condensing framework migrations, so i run a couple of agents to translate x to y, then i review and fix what they do. i literally saved months of time at work doing test migrations this way. the only downside is that Copilot is insanely expensive, but my work gives me an infinite budget for it so it's ok
« Last Edit: December 25, 2025, 12:45:18 PM by Aide33 »

follow-up: i really don't get why people hype AI image/video generation so much, since it seems like the least useful feature it has.

like it's so stupid to use it to automate CREATIVE pursuits instead of using it to automate BORING work


error drift is serious. it's usually 95% accurate, and if you continue to run it over and over again the errors compound and it becomes unmanageable.

ai coding is best used in small chunks, where the complexity is managed one task at a time. projects of massive complexity tend to fail quickly. it ends up generating code that theoretically should work but somehow doesn't. it'll make you think you discovered some sort of million dollar idea, and then you'll realize the code has no soul and it's a different kind of existential dread.

ai is seriously effective at brainstorming and innovative thinking. it can provide quick estimates on pros and cons of coding solutions and point out bottlenecks well. sounds goofy but its all about asking the right questions, and then it becomes incredibly helpful.



beware! use it too much and you'll legitimately forget how to code
« Last Edit: December 25, 2025, 10:46:05 PM by PhantOS »

Quote
ai coding is best used in small chunks, where the complexity is managed one task at a time. projects of massive complexity tend to fail quickly. it ends up generating code that theoretically should work but somehow doesn't. it'll make you think you discovered some sort of million dollar idea, and then you'll realize the code has no soul and it's a different kind of existential dread.
I had to hammer this over and over and over with my coworkers because I think most people dont understand that at a fundamental level its just a black box of probability.

like, for example, let's assume it's 95% right like you said. then you need to isolate the tasks it works on to one prompt at a time for it to stay at a 95% chance of being correct.

if you incorrectly assume it will correct itself, or act like a human, the 5% chance it's wrong compounds with every prompt, and soon you just have a pile of slop.
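to put rough numbers on it (treating each prompt as an independent 95% coin flip, which is obviously a made-up simplification, not a real benchmark):

Code: [Select]
// back-of-the-envelope: chance an n-prompt chain has zero mistakes,
// modeling each prompt as an independent 95%-accurate step (toy model, not measured data)
const perPrompt = 0.95;
for (const n of [1, 5, 10, 20, 50]) {
    const pClean = Math.pow(perPrompt, n);
    console.log(`${n} prompts -> ${(pClean * 100).toFixed(1)}% chance of no compounded errors`);
}
by 20 prompts you're already down to roughly a 36% chance the chain is clean, which is basically why it turns into slop if you never reset the context.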

its simplest to isolate which tasks are "solved" or deterministic and never let the ai do them. for example: if you let the agent try to find files, it will loop around trying random stuff until it finally finds the file 80 prompts in. finding files is literally a solved, deterministic thing, so instead of letting the ai do it, make a script that uses grep and then calls the AI to act on the files for the part that isn't a solved problem.

this way you remove all the solved problems where the ai could accidentally hit the 5% chance of being wrong and waste time doing useless stuff, and it only spends its effort on stuff that's actually useful.
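as a sketch of what i mean (askModel is a made-up placeholder for whatever model/agent call you actually use, and the path and pattern are just examples):

Code: [Select]
// "deterministic first, AI second": grep does the solved part (finding the files),
// the model only ever sees one narrow task per file.
const { execSync } = require('child_process');

// hypothetical placeholder -- swap in your real model/agent API call here
function askModel(prompt) {
    console.log('[would send to model]', prompt);
}

const pattern = process.argv[2] || 'legacyHelper';

// deterministic step: grep lists the matching files instead of letting an agent wander around
// (note: grep exits non-zero when nothing matches, so this assumes there are hits)
const files = execSync(`grep -rl "${pattern}" ./src`, { encoding: 'utf8' })
    .split('\n')
    .filter(Boolean);

for (const file of files) {
    // non-deterministic step: one small, isolated prompt per file
    askModel(`In ${file}, replace ${pattern} with the new helper and change nothing else.`);
}
that way the model never burns prompts on the part a one-liner could do, and each call stays small enough that the 5% failure case is easy to catch in review.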

it's all about managing the probability
« Last Edit: December 26, 2025, 02:56:31 PM by Aide33 »

something under-discussed about AI is that it's dangerous to commit code that nobody has in their brain. like even if you could get AI to build a large codebase that works, when it breaks it will take much longer to debug/mitigate/fix

it really needs to be siloed to low stakes stuff like helper scripts, and summarization tasks

Quote
something under-discussed about AI is that it's dangerous to commit code that nobody has in their brain. like even if you could get AI to build a large codebase that works, when it breaks it will take much longer to debug/mitigate/fix

it really needs to be siloed to low stakes stuff like helper scripts, and summarization tasks
yeah its absolutely stupid to make it the majority of your code base. its an accident waiting to happen

it's the equivalent of getting a bunch of shoddy contractors to build your house and then killing them afterwards. don't be surprised when you can't find what wires are connected to what, or you knock down a load-bearing wall
« Last Edit: December 27, 2025, 12:54:11 PM by Aide33 »

ChatGPT is certainly a helpful tool for coders because it saves time, but it's a big mistake to rely on it totally to code for you if you don't know how to proofread the code and verify its functionality.


Has anyone tried making an AI-generated Blockland addon?