- Diversity doesn't really make your tech team better or worse. Prioritizing for diversity when you're hiring makes your team worse.
- Cache invalidation is relatively easy; it's choosing the right strategy that is hard.
- You hate SPAs because you conflate them with running javascript, and you hate running javascript because you conflate it with predatory advertising strategies. You should hate predatory advertising strategies.
- IP clauses in job contracts contribute to the formation of monopolies and monocultures; they should be collectively fought harder than they are.
- Teaching juniors should be < 10% of a developer's job. If you want an instructor, hire an instructor.
- 90% of abstraction is dreadful. That means _your_ particular abstraction is almost certainly bad, and we don't want to learn it.
- There is a lack of respect for the history of programming. IMO it has caused the industry to be stuck in a perpetual cycle of half-baked rediscovery.
- Similarly, a kind of "FAANG syndrome" lets sub-par ideas take over the industry's mindshare. Once a technology picks up enough momentum it snowballs, and we're stuck working with legacy trash almost immediately. Developers legitimately seem to believe each trend is good.
- Our industry's shared vocabulary is too weakly defined. Phrases like "the right tool for the job" are ubiquitous but essentially meaningless, used as shorthand for "I currently feel this is correct". If we had a real professional lexicon, the first thing juniors would learn would be to enumerate their reasoning precisely. IME most "senior" devs can barely do it.
- Dynamic languages are good, actually. IDE auto-completion and full-project renaming are the features that punch above their weight in static type systems. IMO the remaining benefits of static types are within the same order of usefulness as the pros and cons of dynamic langs; you can argue about them on a case-by-case basis. This means static types aren't inherently better than dynamic languages (which is the popular opinion of the day); there's just a tooling gap right now. Therefore, dynamic langs will eventually make a comeback in popularity.
How do you enforce input and output compatibility across modular functions and projects without strong types? Is there a better standard way to specify the accepted input shape and expected output options?
Strong typing is orthogonal to dynamic typing (a dynamic language can still be strongly typed), so I'll assume you're referring to static typing instead. My opinion is that static type systems are great and a perfect fit when you can model an entire system (e.g. a compiler). I think they are overrated for typical business software that changes over time, because business data tends to outlive any particular code base or compiler version.
The way I see it, there's a spectrum of ways to handle this: type systems, validation code, documentation, integration tests (which validate runtime behaviour), and static analysis tooling. I don't agree that a static type system is the best way of integrating modules (it's often only barely adequate). The optimal solution varies with each project's requirements: is the modular code consumed via a networked API or as a library, and is it internally controlled or third-party? How many teams will work on it? How much care has been taken with backwards compatibility? If we break the interface of some random minor function every update, a static type system may help; then again, if it's just for our team, who cares? I'm sure we've all seen updates make internal modifications that break runtime behaviour without altering data models or function signatures in a way that gets picked up by a compiler.
Even in the most expressive type systems, interfaces eventually need documentation and examples to explain domain/business-logic peculiarities. What if a library interface requires a list in sorted order? Is it better to leak a LibrarySortedList type into the caller's codebase? The modularity starts to break down. The alternative is to use a standard library generic List type, but then you can't force the data to be sorted; to encode this kind of info you need dependent types or similar. A different example is a database connection library: every database supports different key/value pairs in its connection strings. If the database library released a patch that deprecated support for some arbitrary connection string param, you wouldn't find out until someone tried to run the code. Static analysis tools may catch common things like connection strings, but IME there's always some custom "stringly" typed value in business applications, living in a DB schema written 10+ years ago.
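As a toy illustration of the sorted-list case: the constraint can live in a runtime check plus documentation rather than in a leaked type. This is a hypothetical function, not from any real library:

```js
// Hypothetical library function whose contract requires a pre-sorted array.
// The precondition is enforced at runtime (and explained in the docs) instead of
// leaking a special SortedList type into every caller.
function sortedInsert(sortedValues, value) {
  for (let i = 1; i < sortedValues.length; i++) {
    if (sortedValues[i] < sortedValues[i - 1]) {
      throw new Error('sortedInsert expects values in ascending order');
    }
  }
  const idx = sortedValues.findIndex((v) => v > value);
  if (idx === -1) sortedValues.push(value);
  else sortedValues.splice(idx, 0, value);
  return sortedValues;
}

console.log(sortedInsert([1, 3, 8], 5)); // [1, 3, 5, 8]
```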
We also have to consider that the majority of our data arrives serialised, from files or over the wire. It's necessarily constrained to generic wire protocols, which have lower fidelity than data parsed into a richer type system. Given that this data is validated or rejected right after deserialisation, how much extra value do we get from having the compiler reiterate that validation code? Non-zero for sure, but probably not as much as we like to think.
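For example, a hypothetical boundary check on deserialised data might look like the sketch below (the field names are made up); a compiler-checked type for the result would largely restate what this code already enforces:

```js
// Validate-or-reject right after deserialisation; anything past this point
// can assume the shape holds, with or without a static type describing it.
function parseOrder(json) {
  const data = JSON.parse(json);
  if (typeof data.id !== 'string') throw new Error('order.id must be a string');
  if (!Number.isFinite(data.quantity) || data.quantity <= 0) {
    throw new Error('order.quantity must be a positive number');
  }
  return { id: data.id, quantity: data.quantity };
}

console.log(parseOrder('{"id":"A-100","quantity":2}')); // { id: 'A-100', quantity: 2 }
```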
Our job is programming. I regularly see opinions that "90% of our job is not programming" and I don’t relate. Sure, our job is not just programming, but honestly if you don’t spend at least 50% of your work time programming, there’s something seriously wrong in your organization.
What is "our" job in this context? For example, when I was a junior over 50% of my work was literal programming, but as I get more senior I program less and less.
From my perspective "our" job is to deliver solutions to specific problems our end-users have. How people accomplish that varies a lot by their position/seniority.
To get into the weeds: when I first started, the worst-case scenario was that I wasted my own time with poor or over-engineered solutions. Now I can waste half a dozen programmers' time by not doing enough due diligence or planning.
It really depends on your role. I still think our job is ultimately to solve problems for users and/or other stakeholders, and that coding is just one of the tools we use.
But it's not always the best tool. Even as an IC, I've done far more with emails and meetings than just writing code without thinking, whether that's discussing UX implementation details with the designer or pushing back on some half-baked over-engineered solution from management or fighting some evil advertising and tracking scheme from marketing. We don't just write code but also gatekeep it with some level of professional judgment and ethical discretion. (Or at least should.)
I think the orgs that don't give you any autonomy or agency beyond "code monkey" are the more problematic ones. Not only will you burn out having to repeatedly implement things you have no input into, you'll also be the easiest to replace if all you do is code.
> ultimately to solve problems for users and/or other stakeholders
Doesn't this describe every job on Earth?
To some degree, yes, and I'd hope that other skilled professionals would take a similar approach.
Like if I wanted to add EV charging to my home, I'd hope the electrician would take the time to explain the different levels of charging, the breaker and wire upgrades needed, etc., and find a suitable installation site around the house, not just start hooking things up willy-nilly. Or that an HVAC person might talk about the pros and cons of heat pumps, or a doctor might discuss different treatment options, etc.
It's different from, say, being a line worker in a factory assembling the same part 10000x a day, or a fast food worker.
Sure, at some level we're all just "solving problems", but I'm arguing that a good dev thinks about the problem and possible solutions as a whole, and utilizes that agency to make the final output better, instead of just coding Jira tickets to spec and never saying a peep.
But that's my own bias as a predominantly frontend person working for small or medium sized companies where specialization isn't as extreme. Maybe at bigger companies and teams they already have many layers of UX/UI/design/management and don't need (or want or appreciate) a dev speaking up about any of those things. In my experience it's never that black-and-white and a lot of tickets and designs are ambiguous and require both professional judgment and some empathy to implement well.
Maybe that's why I prefer the generality of the frontend vs, say, hyper-optimizing a very specific database call.
100%. To an extent that's why I don't often freelance anymore: it's very easy to fall into a place where, e.g., to build a booking app for a pet shop you need to become an expert in veterinary medicine.
Heh, it's funny, my partner is a vet tech and I keep thinking how interesting it would be to build a CRM for them and their patients (there is already an industry for that and some of the apps are actually decent).
Not everyone is cranking out boilerplate web services/web apps
I would say that working as a programmer in a corporate environment is a bit similar to being paid to be a novel writer by people who don't know how to read, but absolutely want to tell you how to do your job properly.
This is true. Being in a corporate environment kills your creativity.
Is this an unpopular opinion?
* State management is one of the most simple problems to solve in any application.
* WebSockets are superior to all revisions of HTTP except that HTTP is sessionless. Typically when developers argue against WebSockets it’s because they cannot program.
* Your software isn’t fast. Not ever, unless you have numbers for comparison.
* Things like jQuery, React, Spring, Rails, and so forth do not exist to create superior software. They exist to help businesses hire unqualified developers.
* If you want to be a 10x developer just don’t repeat the mistakes other people commonly repeat. Programming mastery is not required and follows from your 10x solutions.
* I find it dreadfully hypocritical that people complain about AI in the interview process and simultaneously dismiss the idea of requiring licensing for professional software practice.
> WebSockets are superior to all revisions of HTTP except that HTTP is sessionless. Typically when developers argue against WebSockets it’s because they cannot program.
What did you mean by this? Were you suggesting that interactive web apps should maintain a persistent and stateful connection to the server and use that to send interaction events and receive the outputs back, like a video game would, rather than using stateless HTTP calls and cookies and such? Why is that superior?
And sorry if I misunderstood!
> Were you suggesting that interactive web apps should maintain a persistent and stateful connection to the server and use that to send interaction events and receive the outputs back, like a video game would
That is how I design all my web-facing applications now. The idea is that with WebSockets all messaging is fire-and-forget, and that is independently true on both sides of the wire. That means everything is event-oriented on each side separately and nothing waits on round trips, polling, or other complexity. In my largest application, when I converted everything from HTTP to WebSocket messaging I gained an instant 8x performance boost and dramatically lowered the architectural complexity of the application.
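A minimal browser-side sketch of that event-oriented style; the endpoint, message types, and handlers here are made up for illustration, not taken from the parent's app:

```js
// Each message type gets its own handler; nothing below awaits a reply.
const handlers = {
  chat: (data) => console.log('chat message:', data),
  presence: (data) => console.log('presence update:', data),
};

const ws = new WebSocket('wss://example.com/socket');

ws.addEventListener('open', () => {
  // Fire and forget: send and move on, no round trip to wait on.
  ws.send(JSON.stringify({ type: 'user-joined', data: { user: 'alice' } }));
});

ws.addEventListener('message', (event) => {
  // Incoming messages arrive as independent events and get routed by type.
  const msg = JSON.parse(event.data);
  handlers[msg.type]?.(msg.data);
});
```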
That's fascinating. You should do a writeup about it!
I had thought (perhaps incorrectly? it's not something I've spent a lot of time pondering) that a stateful connection like this is fragile in the real world compared to HTTP because it requires some sort of manual reconnection to the server on network changes (like if you're on a phone in a train or in a car), and that it would require both the server and app itself to be aware of what is dynamic realtime data, what is cacheable, what is stale, etc. Like kinda related to your other statement about state... doesn't this mean you're sharing and syncing state across both the server and the client?
Competitive video games operating over UDP are the closest everyday analogy I can think of, where the server handles most state (and is the ultimate source of truth) but the client optimistically approximates some stuff (like player movement and aiming), which usually works fine but can lead to rubber-banding issues and such if some packets are missed. But most gaming happens between one server and just a small handful of clients (maybe 100 or 200 at most?).
In a web app, would the same sort of setup lead to UI jank, like optimistic updates flickering back and forth, etc.? I suppose that's not inherent to either HTTP or websockets, though, just depends on how it's coded and how resilient it is to network glitches.
And how would this scale... you can't easily CDN a websockets app, right, or use serverless to handle writes? You'd have to have a persistent stateful backend of some sort?
One of the things I like about the usual way we do HTTP is that it can batch a bunch of updates into a single call and send that as one single atomic request that succeeds or fails in its entirety (vs an ongoing stream), and it doesn't really matter if that request was one second or one minute after the previous one as long as the server still knows the session it was a part of. Like on both the client and the server, the state could be "eventually" consistent as opposed to requiring a stable, real-time sync between the two?
Not disagreeing with you per se (hope it didn't sound that way), just thinking out loud and wondering how the details play out in a setup like this. I'm definitely intrigued!
It is the same level of fragility as HTTP, because most HTTP connections now are stateless keep-alive connections. The solution to this fragility is to open a new connection after the prior connection breaks. You will know when the WebSocket connection breaks in both the browser and Node because there is an event exactly for that. That new connection request can occur on a timed interval or as needed at the next message, depending upon concerns of directionality and urgency.
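A minimal browser-side sketch of the reconnect-on-break idea; the 2-second timed retry is just one of the two options mentioned (the other being reconnect at the next message):

```js
let ws;

function connect() {
  ws = new WebSocket('wss://example.com/socket');

  // The 'close' event fires whenever the connection breaks;
  // open a fresh connection rather than trying to revive the old one.
  ws.addEventListener('close', () => {
    setTimeout(connect, 2000);
  });
}

connect();
```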
Transmission is not a function of state. Your application gains greater durability when those two qualities become fully independent of each other.
Transmission jank is a concern for web servers that come down and then back up with restoration of many concurrent connections. The solution to that is to stagger connection establishment in batches and intervals. This is not a normal operating concern though, because how often does your web server crash on you in production when you have 10,000 or more active connections?
As for CDNs, leave static asset requests to HTTP. For performance, these files should be consolidated into the fewest number of requests, balanced against initial rendering concerns in the browser. This should also typically be limited to the initial page request.
WebSockets are often blocked by corporate firewalls/proxies; I don't think it's as simple as saying WebSocket > HTTP.
To solve for that you can serve WebSockets on the same port as HTTP. Most web server applications don't know how to configure that, but I know it can be done because I'm doing it right now in my own applications.
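One common way to do this in Node is to attach the WebSocket server to the existing HTTP server so upgrade requests share the port. This sketch assumes the third-party `ws` package and may not match the parent's own setup:

```js
const http = require('http');
const { WebSocketServer } = require('ws');

const server = http.createServer((req, res) => {
  res.end('plain HTTP response');
});

// Attaching to the HTTP server lets WebSocket upgrade requests share port 8080.
const wss = new WebSocketServer({ server });
wss.on('connection', (socket) => {
  socket.on('message', (data) => socket.send(`echo: ${data}`));
});

server.listen(8080);
```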
The more tools between raw code and a running solution, the more fragile it is.
This issue is particularly prevalent in the JavaScript world, where it isn't uncommon that when you hit build/run, half a dozen processes need to run. This is partly why I wish they'd just put native TypeScript in the browser and simplify the build pipeline by removing several steps (and, yes, TS would evolve slower/more conservatively, which I also consider a positive).
WebAssembly is the apex of this issue. Super fragile to build and impossible to debug. It is what I call "prayer-based development," because you "pray" it works or troubleshooting becomes a nightmare.
I realized this morning, while riding the bus, that my habit of creating a new side project every 6 months, working on it for maybe 2-3 months of relatively high productivity, and suddenly losing interest afterwards (usually unfinished, or unpolished), probably means that programming is an addiction for me.
No one is using my projects, and I'm not exactly learning a lot from them, except in the first half of each one, because everything afterwards is just polishing (e.g. I did an interpreter for a Python subset a year ago, and TBH the whole concept was pretty straightforward once my brain got it in the first few weeks). It's more of an obsession, and that probably explains why I get burned out after 2-3 months and COMPLETELY lose interest. Looking at my GitHub commit history, it is always 2-3 months of almost daily commits followed by 3 months of absolutely no activity.
I don't think this is the right path for me if I want to leverage my side projects to get a job in low-level programming. Either I figure out how to drill deeper into each of my projects, or I figure out how to avoid the burnout every 2-3 months. If the market were good I'd go straight to applying for systems programming jobs, but right now it's tough even to keep my current job.
So this is my unpopular opinion about side-project programming: if you are like me, maybe it's time to rethink the strategy. We only have one life, and I'm already 42. Gosh! Maybe I (we) should just find another hobby.
Python package management infrastructure and tooling works fine for me, and I use python frequently.
There is a relevant xkcd: https://xkcd.com/1987/
I generally don’t really care about what programming language I’m using
To me algorithms and solving problems in the abstract sense are the actual interesting part of the job
If discussing language/framework choices is the most interesting part of the job, it means I have a boring job/project/domain
The internet and software has destroyed humanity.
That my opinions are right and if you don't agree with me you truly suck!
That's not unpopular. I have the same opinion and I know it's right!
Finally, something we can all agree on!
If you consider the constraints on how they could evolve and the alternatives that have been attempted, I think web languages turned out pretty well.
I wouldn't say they're "good", but it's cool that they've managed to stay somewhat open at all.
In an alternate timeline, Microsoft might've won out with single vendor .NET everywhere. The DX would be better but everything would be way more closed.
I think openness is a must. Alternatives that were proprietary largely died out because they were proprietary. They can't survive transitions to newer platforms, like how Flash failed to make it into mobile.
To be fair, if they had a monopoly perhaps they would have had little incentive to make the DX better, since there'd be no one to compete with.
I don't think that's necessarily true, though. Even back in the pre .NET days, when Microsoft pretty much did have a monopoly on PC desktops, Visual Studio (not VSCode, but its full-fledged big brother) had pretty awesome DX.
And having a standard UI layer (like WinForms) made GUI development way, way better for both the developer, who could drag and drop shapes and easily align frames and tabs and buttons instead of trying to wrangle CSS, and the average user, who got a standard UI look & feel across many different apps.
The openness of the Web led to its popularity (and my career), but then every company ended up making its own code style, IDE, APIs, UI layers, etc., leading to extreme fragmentation in both DX and UX that is still a mess to this day. Of course I still prefer this over a closed Microsoft monopoly (or an Apple one), but it's certainly made for a lot of unnecessary reinventions of the wheel.
It's a lot easier than you all make it out to be [0].
I'm not a 'programmer' like you all are, at best I can hack together code to get things done. I use git maybe once a year. I'm a biotech person that likes to hang out because y'all are mostly smart and the community is great here.
But man alive, this is not that difficult. Yes, it's hard to wrap your head around some nested dependencies. But it's a lot easier than any chain of protein/gene/neuron interactions. This stuff makes sense and you can edit it. My field can't really do that most of the time, and it really doesn't make sense for decades (at best).
Like, I'm trying to follow along here and am mostly lost. But the few times I do know something about the code y'all are talking about, it's made out to be a lot more complicated than it needs to be.
I mean, yeah, keep that up though. Makes your bosses pay you more and lord knows those suits should be doing that and not spending the cash on rockets and shitty pick-ups.
But for real, y'all are making this out to be a lot harder than it is.
[0] this is supposed to be an unpopular opinion, right?
Isn't this just Dunning-Kruger? Try building something substantial for your gene science thing rather than a notebook script or CRUD app and see what complexity you run into.
JavaScript is excellent.
When I read this, the first thing that came to mind was "until you have to work with time zones". I generally enjoy working in JS, but it's such a pain point for me that I can't help but think about it whenever I see a datetime.
It doesn't get as much mindshare as the problems with typing and frameworks and fashion trends and such, but my god, I've never seen a major, popular language with such poor support for basic time zone manipulation and storage. It is really really bad and won't be fixed until the Temporal API is stable and widely available: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
What app/problem requires so much JS time zone manipulation? I have an app that's front and back-end JS, and I do all time zone stuff in Postgres.
It's not even anything fancy, just simple things like calendars, date pickers, meeting schedulers, etc. Sure, you can do it all serverside, but we're talking about JS here, not an external database.
JS itself can't "keep" an original time zone in a Date object, e.g.:
`new Date("2024-04-01T00:00:00Z").toString()`
gets rendered in your browser's time zone, even though the input was in Zulu/UTC. Also note that a time offset (Z or +08:00) is not the same as an IANA time zone string, and that's a one-way conversion. If you go from something like America/Los_Angeles to -07:00, you have no way to tell if the -07:00 is due to daylight saving time or another locale that doesn't observe American DST. And JS doesn't store either piece of info.
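A quick illustration with only built-ins (the Asia/Tokyo rendering is just an arbitrary example):

```js
const d = new Date('2024-04-01T00:00:00Z');

// Date stores only a UTC timestamp; the zone/offset from the input string is discarded.
console.log(d.toString());    // rendered in whatever the runtime's local zone happens to be
console.log(d.toISOString()); // "2024-04-01T00:00:00.000Z"

// Rendering in a specific IANA zone has to go through Intl, not Date itself:
console.log(d.toLocaleString('en-US', { timeZone: 'Asia/Tokyo' })); // "4/1/2024, 9:00:00 AM"
```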
If you have multiple users across different time zones trying to plan an event together, JS's date handling is super footgunny (is that a word?) and it's very easy for devs to make errors when converting datetimes back and forth from the server/DB to the app UI.
And because JS is so powerful today, there are many things that should be doable entirely clientside, but aren't easy right now. For example, relative time: wanting to make a scheduler that can search for +/- 7 days from a certain selected datetime. What is a "day"... is it 24 hours * 7? What do you do about daylight savings time boundaries? Or if you go from Feb 1 to Mar 1, is that "one month" or "28 days"?
These may seem like edge cases to you, but when it comes to datetime... really it's all edge cases :( You run into half-hour time zones, daylight saving time (which is a set of rules that varies over time even within the same country, not just a static offset per time zone and date range), cross-locale formatting, etc.
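To make the "+/- 7 days" question above concrete, here's a minimal sketch of how naive millisecond math drifts across a DST boundary (America/Los_Angeles and these dates are just an example):

```js
const fmt = new Intl.DateTimeFormat('en-US', {
  timeZone: 'America/Los_Angeles',
  dateStyle: 'short',
  timeStyle: 'short',
});

const before = new Date('2024-03-09T20:00:00Z'); // noon PST, the day before US DST starts
const naiveNextDay = new Date(before.getTime() + 24 * 60 * 60 * 1000); // "one day" as 24h of ms

console.log(fmt.format(before));       // "3/9/24, 12:00 PM"
console.log(fmt.format(naiveNextDay)); // "3/10/24, 1:00 PM" -- the wall clock drifted an hour
```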
A lot of this is very doable with existing libs like Luxon, or date-fns for simpler use cases, but they are fundamentally hacking around the weaknesses of the built-in Date object with their own data structures and abstractions.
For me as a frontend dev working on ecommerce sites and dashboards, I've had to correct more datetime bugs from other JS devs (including some with decades of experience) than any other kind of issue, including React state management. The tricky part is that a lot of the weaknesses are non-obvious, but Date really is buggy and weak as heck, especially compared to many serverside languages. It's based on a very early version of Java's Date class; Java's date handling has since gotten a lot better, but JS's Date stayed frozen in time.
Thankfully, most if not all of these issues will be solved with the Temporal API once it's stable... it's been like 10+ years under development, since the Moment days. Can't wait!
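For what it's worth, the proposal's API currently shapes up roughly like this; a sketch only, since Temporal isn't stable yet and details may still shift:

```js
// Sketch based on the current Temporal proposal; names/behaviour may change before it ships.
const start = Temporal.ZonedDateTime.from('2024-03-09T12:00:00-08:00[America/Los_Angeles]');

// Calendar-aware arithmetic: "7 days" keeps the local wall-clock time,
// even across the US DST transition on March 10.
const weekLater = start.add({ days: 7 });
console.log(weekLater.toString()); // "2024-03-16T12:00:00-07:00[America/Los_Angeles]"

// The IANA zone travels with the value, unlike Date.
console.log(weekLater.timeZoneId); // "America/Los_Angeles"
```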
Interpret everything. Perl, Tcl, irb, Groovy on the desktop.
On the iPhone, only BASIC. :-)
"Just Run, Baby."
The vast, vast majority of code written is what should have been compiler output.
Web was a mistake...
This is surprisingly popular among programmers
Found the Gopher holdout
Dynamic languages are great and TypeScript is the stupidest thing ever.
Trying to debug and find pointer and memory leak problems.