Software engineering trends that annoy you
Tyler66:
--- Quote from: Aide33 on April 26, 2024, 08:27:41 PM ---I know you are not a web developer but I'll be blunt:
300 MB of memory is a lot, sure, but guess what: I don't care.
The average smartphone has 4 GB of RAM. This means that even if the operating system takes up *half* of the RAM at any given moment, the user can still run six of these 300 MB apps simultaneously.
And that's on the average phone! No one is running these apps all at once!
Like I said previously, the ease of use and rapid deployment trumps being able to multitask 1304 apps locally, which no one does. I wouldn't be surprised if in the future we saw a native OS/browser sandbox environment so apps don't have to ship with a web browser.
--- End quote ---
You know well that smartphones are not the devices we are talking about here. Nobody is going to gripe about devices which are by design not intended for multitasking. It's entirely apples and oranges.
On desktop, users will be running multiple applications all at once (not counting the two-to-three digits' worth of daemons/services running in the background), most of which will not have the luxury of writing application state to disk when it's not in use. It simply does not work that way.
Having your webapp or website consume anywhere from 100 MB to 2 GB of memory just because "everyone has at least x GB of memory" is absolutely inexcusable and an awful mindset for any developer to have. It's also inconsiderate to the end user, who will want to use other applications without having to throw down $250+ on a new set of 32 GB DIMMs. But as you said: you do not care.
--- Quote from: Aide33 on April 26, 2024, 08:27:41 PM ---This is an outdated stereotype from a time when ECMA standards didn't exist and the entire industry wasn't behind React. Web standards are very well documented and set in stone, a lot of the really annoying stuff is deprecated, and JavaScript is no longer the only language people develop with on the web (I'll expand on this in my next point). Your idea that JS interpreters and HTML+CSS rendering are slow comes from another era; this is no longer the case. Web browsers have literally become some of the most optimized pieces of software ever made. I remember looking at the stats for how fast JS has become in every engine available and being blown away by how far we've come. Your statement that "we should be looking for new replacements" is fairly ignorant, because people have been pouring blood, sweat, and tears into optimizing the web. And guess what? You can run bytecode! You can run binaries!
--- End quote ---
And the fruits of all three (JS, HTML, and CSS) have manifested in busy CPUs and monopolized memory space.
JS has been JITted to hell and back and yet still lags significantly behind other scripting languages like Lua and Squirrel. Sure, you can throw every last SIMD instruction, data-oriented design technique, -Ox flag, and inlined function at the problem for both the parsing and bytecode evaluation, but the end results still speak for themselves.
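For reference, here's the shape of the micro-benchmark such comparisons usually rest on; a minimal sketch with an illustrative workload, not a rigorous test (real comparisons port the same workload to each language and control for warm-up):
--- Code ---
// Minimal JS micro-benchmark sketch (illustrative only).
function fib(n) {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

fib(25); // warm-up pass so the JIT gets a chance to compile the hot path

const start = performance.now();
fib(32);
console.log(`fib(32) took ${(performance.now() - start).toFixed(1)} ms`);
--- End code ---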
I still maintain that HTML+CSS are an awful bunch. Ever tried writing a parser for HTML? If not, ask someone who has whether they had fun doing it. Better yet, interview people who have tried to parse it with regular expressions. The DOM in general is an awful way to lay out information on a screen. It worked well when simple, static, linear pages ruled supreme, but it's horribly inefficient for the dynamism of the modern web. CSS is just a giant band-aid over the problem.
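To make the regex point concrete, here's a minimal sketch you can paste into a browser console (DOMParser is browser-only; the markup is a made-up example):
--- Code ---
// A regex can't count nesting depth, so a lazy match stops at the
// first closing tag it finds and mangles the nested element.
const html = '<div>outer <div>inner</div> tail</div>';

const naive = html.match(/<div>(.*?)<\/div>/)[1];
console.log(naive); // "outer <div>inner" -- truncated mid-element

// A real parser builds the tree and gets it right:
const doc = new DOMParser().parseFromString(html, 'text/html');
console.log(doc.querySelector('div').textContent); // "outer inner tail"
--- End code ---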
And just because a bunch of developers poured their blood, sweat, and tears into something does not exempt it from consideration for replacement.
OpenGL was arguably the worldwide standard API for 2D and 3D rendering for years; many man-hours were put into extending it and optimizing drivers for it, and it worked well for what was asked of it. Nonetheless, a replacement was sought that better matched modern hardware than the fixed-function pipelines OpenGL was originally designed around 30 years ago, which is how we got the infinitely better Vulkan API. You can say the exact same about the X Window System and Wayland.
--- Quote from: Aide33 on April 26, 2024, 08:27:41 PM ---Every major browser supports WebAssembly! You can precompile any language and run it in a sandbox on the client-side just like JS.
--- End quote ---
WASM's IR is a step forward but still lags behind Oracle's Java bytecode and Microsoft's CIL in the few benchmarks I've seen. I don't hold that against it, though, as it's still young and has a lot more time to get better.
--- Quote from: Aide33 on April 26, 2024, 08:27:41 PM ---The reason apps are slow is that companies prioritize profits and quick delivery over making good things. This is not unique to web development; it's just a lot more visible because people use the web a lot.
--- End quote ---
This is what I was trying to get at. Clearly there's a gap between the tools used and how they're actually being used, given the mountain of issues plaguing web development. I doubt either the developers or the companies are going to budge, so changing the tools to fit the demands of the modern web is the clearest option, at least in my eyes.
I know I'm probably making enemies with all the web developers reading this. Sorry, I'm from the other side of town.
Aide33:
--- Quote from: Tyler66 on May 01, 2024, 07:16:12 PM ---You know well that smartphones are not the devices we are talking about here. Nobody is going to gripe about devices which are by design not intended for multitasking. It's entirely apples and oranges.
On desktop, users will be running multiple applications all at once (not counting the two-to-three digits' worth of daemons/services running in the background), most of which will not have the luxury of writing application state to disk when it's not in use. It simply does not work that way.
Having your webapp or website consume anywhere from 100 MB to 2 GB of memory just because "everyone has at least x GB of memory" is absolutely inexcusable and an awful mindset for any developer to have. It's also inconsiderate to the end user, who will want to use other applications without having to throw down $250+ on a new set of 32 GB DIMMs. But as you said: you do not care.
And the fruits of all three (JS, HTML, and CSS) have manifested in busy CPUs and monopolized memory space.
--- End quote ---
My point was: if the average phone can run many of these applications at once, then the average computer should be even better at it. Unless you are implying that desktop computers are somehow worse on average?
Remember, we are talking about a framework made for creating interactive UIs: most users cannot context-switch between like 8 different graphical apps at once. I'll be running 4, MAYBE 5, different Electron apps at a time. Hell, even in worst-case scenarios (running 10 vscode instances for whatever reason) I've been able to use dozens of instances of these programs with no problem.
I don't think a few hundred megabytes is that big of a deal anymore when a single image on the internet can be tens of megabytes.
--- Quote from: Tyler66 on May 01, 2024, 07:16:12 PM ---JS has been JITted to hell and back and yet still lags significantly behind other scripting languages like Lua and Squirrel. Sure, you can throw every last SIMD instruction, data-oriented design technique, -Ox flag, and inlined function at the problem for both the parsing and bytecode evaluation, but the end results still speak for themselves.
--- End quote ---
Can you give me the source for this claim? I cannot find anything saying JS is significantly behind every interpreted language.
Almost every recent article and benchmark shows that the V8 engine behind Chrome and NodeJS makes JS one of the fastest interpreted languages (and I haven't even mentioned the Bun runtime). Your knowledge of the subject seems to be lagging by like 5 years. Lua and LuaJIT have similar or worse performance.
This is my criticism of people who still complain about things like this: JS has already come out of its dark age and emerged as THE web standard, but people still harp on about stuff that was fixed ages ago. The V8 engine literally has teams of people at Google working around the clock to make JS the fastest interpreted language, because they have an incentive to make their browser the fastest. If what you said were true, what would've stopped another company from making a browser with a different scripting language and then swaying the W3C to change the standard because their stuff is faster?
And again, all these benchmarks are incredibly close. The people who obsess over them are micro-optimizing code. The average website doesn't need a 4% speed boost from using C or C++, and the ones who do need it for graphics performance can just use WASM.
--- Quote ---WASM's IR is a step forward but still lags behind Oracle's Java bytecode and Microsoft's CIL in the few benchmarks I've seen. I don't hold that against it, though, as it's still young and has a lot more time to get better.
--- End quote ---
It's fairly new, but it's getting really good at an alarming rate because of the potential upsides: the value proposition of sandboxed apps running on every platform ever is a really good incentive to work on it.
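For anyone who hasn't seen it, this is roughly what it looks like from the JS side; a minimal sketch where the byte array is a tiny hand-assembled module exporting an add function (in practice you'd compile it from C, Rust, etc. instead of writing bytes by hand):
--- Code ---
// Tiny WebAssembly module, hand-assembled:
// (module (func (export "add") (param i32 i32) (result i32)
//   local.get 0 local.get 1 i32.add))
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32,i32)->i32
  0x03, 0x02, 0x01, 0x00,                               // func 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // body
]);

// Runs in any modern browser or Node; the module is fully sandboxed
// and can only touch what JS explicitly hands it.
const { instance } = await WebAssembly.instantiate(wasmBytes);
console.log(instance.exports.add(2, 3)); // 5
--- End code ---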
--- Quote ---This is what I was trying to get at. Clearly there's a gap between the tools used and how they're actually being used, given the mountain of issues plaguing web development. I doubt either the developers or the companies are going to budge, so changing the tools to fit the demands of the modern web is the clearest option, at least in my eyes.
--- End quote ---
I know from your perspective this statement seems like it makes sense, but there are a lot of problems with it. It's like saying "The Unreal Engine editor is hard to use and it's really easy to make slow games in it, so instead of improving the UI and patterns to fit what the devs need, let's replace it overnight with Unity and make all the devs port everything."
The real issue behind devs not optimizing stuff is that it is really easy to get into web development, which leads to an oversaturation of bad developers. The tools for the job were really poorly made up until ~5-7 years ago, and a lot of those developers never updated their skills. For example, a lot of developers still reimplement their own "deep object cloning" algorithms or import one from libraries (ballooning their memory footprint) when a "deep object clone" function has been part of the web APIs for years now. These kinds of development mistakes will not stop happening until the developers get better and the resources on the internet get updated. JavaScript is unfortunately plagued by the fact that most of the content on the internet about it is wrong/outdated (or just lies), and that breeds an environment that creates bad developers.
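To be concrete about the cloning example, structuredClone() is the web API I mean; it's been in all major browsers (and Node 17+) for years now:
--- Code ---
// structuredClone() makes hand-rolled deep-clone helpers (and the
// lodash.cloneDeep imports that balloon bundles) mostly unnecessary.
const original = { user: 'Aide33', prefs: { theme: 'dark' }, joined: new Date() };
const copy = structuredClone(original);

copy.prefs.theme = 'light';
console.log(original.prefs.theme); // 'dark' -- nested objects were truly copied

// The old JSON round-trip footgun it replaces silently mangles types:
const lossy = JSON.parse(JSON.stringify(original));
console.log(lossy.joined instanceof Date); // false -- the Date became a string
--- End code ---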
Aide33:
--- Quote from: Conan on April 30, 2024, 06:17:59 PM ---i think the most offensive part of what you said was that a website requiring 300mb of ram to run is acceptable or normal. i think that's unjustifiable outside of extremely rare and specific circumstances (e.g. running a full-fledged video game or youtube), and the fact you didn't think so speaks volumes to onlookers: you're part of the problem with modern website performance, whether or not you want to be. i definitely recoiled at that number myself, but didn't deem it important enough to respond at the time.
--- End quote ---
I think you misunderstood my point in context. Let me clarify:
In the context of the conversation, I was talking about Electron apps. Stand-alone single-page webapps require a lot more resources than the average website. Apps that run standalone on your computer should obviously have way more leeway in how they operate, and that should be taken into account. For example, it's not unheard of for Photoshop, text editors, IDEs, etc. to go above 300 MB; they need to constantly cache things like documents, files, and web requests. In fact, the terminal instances I have open right now are each using like 80-100 MB of RAM! That sounds horrible for a text-based application, but it's the cost of being able to scroll back through the entire history since I opened it. RAM is meant to be used to make the user experience snappy and usable (within reason); imagine if every time I scrolled up 100 lines in vscode it had to hit the HDD to load more of the file. That stuff would be slow as forget.
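A quick back-of-envelope with assumed numbers (I haven't measured any particular terminal) shows why that figure is plausible:
--- Code ---
// Rough scrollback-buffer math; every figure here is an assumption,
// not a measurement from a real terminal emulator.
const cols = 200;            // cells per line
const bytesPerCell = 4;      // codepoint + color/style attributes
const scrollbackLines = 100_000;

const bytes = cols * bytesPerCell * scrollbackLines;
console.log(`~${(bytes / 1024 / 1024).toFixed(0)} MB`); // ~76 MB
--- End code ---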
I wholeheartedly agree with you that a normal CRUD website (like a forum or a wiki) should never, ever use more than 10 MB of RAM; 300 MB is insane for that scenario.