That line about code moving into streets and grids really stuck with me, Fady Desoky Saeed Abdelaziz. Makes you pause before calling something a small design choice.
I No Longer See Software as “Just Software”
@[Ben Miller]
Thank you, Ben — I really appreciate that.
That was exactly the idea behind that line. A lot of what we call “small” decisions in software quietly scale into real-world systems.
I think once you connect code to its real-world footprint, it becomes harder to treat design choices as purely technical.
I usually think of it like lifting weights.
Is the problem actually heavy enough to need a machine, or can a person handle it just fine?
And if it is heavy, does it need to be lifted over and over again, or just set down once?
At Gnoke, we lean toward lightweight solutions — ease of use, clarity, and freedom over complexity. If software doesn’t make things simpler or freer, it’s probably the wrong tool. That’s my honest opinion.
@[edmundsparrow]
The “lifting weights” analogy is really powerful — especially the point about frequency.
I also strongly agree on lightweight solutions. Most problems today aren’t about what software can’t do, but about how heavy and intrusive it quietly becomes.
Really appreciate this perspective, edmundsparrow.
Great one, Fady!
For the past few days, I have been thinking about how much electricity and computing resources systems like ChatGPT consume at scale. After reading your post, it makes me question whether the cost of running AI is truly worth it. Maybe it is, or maybe it isn’t. I guess only time will tell.
@[Vignesh J]
That’s a really important question, and I’m glad you brought AI into the discussion, Vignesh.
I think the real issue isn’t whether AI is “worth it” in general, but what kind of AI is worth it.
There are use cases where the social or scientific value might justify the energy cost (healthcare, accessibility, education, climate modeling, etc.).
And there are other cases where we’re clearly over-using heavy systems for problems that don’t truly need them.
For me, responsible software starts with being honest about that trade-off:
not just “can we build it?” but “should this problem consume these resources?”
Maybe the future isn’t less AI — but more efficient, more intentional AI.
I’m curious: what kinds of applications do you personally feel justify that cost?
@[edmundsparrow]
That’s actually a really good concrete example. I think cases like voice notes vs. voice-to-text show exactly what you described earlier: sometimes we build more system than the problem really needs.
For me, the responsible question isn’t “can both exist?” but “what real friction are we removing, and at what cost?”
If the goal is just capturing a thought, a lightweight voice note might already solve it.
Adding transcription, storage, models, and pipelines should only happen when it clearly unlocks new value — not by default.
Otherwise, we slowly turn simple human actions into heavy infrastructure.