# https://iio.ie recent posts backup
older entries at @/archive.html.
# x11lock: my own minimalistic lock screen for x11
i need one functionality from a lock screen: prevent people "accidentally" interacting with my machine, such as nosy kids or coworkers. but i generally trust people around me so this is more about putting up a fence rather than having an impenetrable protection. and when i come back to my machine, i want to quickly and conveniently unlock the screen (ideally one handed) so that i can continue where i left off. but most lock screen software wants you to enter your login password to unlock it and doesn't allow for customization. for me that's overkill, let me just configure some secret magic key combo to unlock it and i'm happy.
since most lock screen software doesn't let you do that and is super bloated, i wrote my own. writing a screensaver is surprisingly simple. you need to do 3 things:
you don't even need to implement stuff like timeout configuration, just use `xset s` to configure those and have the software follow those settings. in fact, x11 will announce when the screensaver needs to activate, the daemon just needs to listen for those events. even daemonizing is pretty much a single line in c thanks to libc's `daemon()` function. all this leads to a very straightforward 220 lines of c code: https://github.com/ypsu/cfg/blob/master/utils/x11lock.c.
this is how i configure and start my screensaver daemon:
    xset s noblank
    xset s 900
    xset dpms 1200 1200 1200
    x11lock start
it also supports manual activation. "x11lock activate" just sends the daemon a sigusr1 which starts the lock window immediately. i have this binding in my .i3/config:
    bindsym $mod+Mod1+l exec x11lock activate
    bindsym $mod+Scroll_Lock exec x11lock activate
i admit there's a lot of stuff missing from it, like the ability to customize the key combo (rather than having it hardcoded) and proper multimonitor support. but i don't need those so this is completely fine for me for now. it probably wouldn't be too hard to add them if i really wanted to.
btw, even xscreensaver's author thinks most screensaver software is bloated:
anyway, i've been using this minimalistic screensaver at home for a while now and it works perfectly. i'm using a key combo that is hard to enter by randomly mashing keys. it's sometimes even hard for me to enter intentionally. but it was fine after i got the hang of it.
published on 2023-01-02
# webshooter: an idea for a simple multiplayer first-person arena shooter
in @/shooter i've already talked about what shooter game i'd like to see. i kept revisiting that fantasy and was wondering if there was some core part i could extract and implement. i think i've found the part i could reasonably implement sometime later. so in this post i sketch the idea for my own future reference.
here are the key features:
the key experience i want to recreate through this is the suspenseful dance in shootmania. suppose you are in a small arena with another player both with empty charges. now both of you have to wait until the guns charge. but when are you going to shoot?
do you wait until your first charge is ready and then shoot away? you really have to aim well because you won't have another chance for a while again. or do you keep waiting until your charges are full and then you spray-shoot the other for a high hit probability? but then you are exposed to the other player who might start shooting sooner and hit you.
it sounds very suspenseful! and that's the core gameplay i want to recreate with this small game.
each of the above features should be simple to implement on its own. i just need to find a free month or two to learn all this modern webgl stuff and work through the above feature list.
published on 2023-02-01
comment #webshooter.1 on 2023-02-02
communication is done through peer to peer webrtc without any central server.
Like, how? Paxos?
comment #webshooter.1 response from iio.ie
nah, nothing fancy. i meant that i don't have to run the game loop on my server. the first player creating a lobby room will be the server and will just replicate data between the other players. so there will be a central server but it will be one of the players.
the only support needed from my server is to help establish the webrtc connections. my server (this blog) already has a feature for doing that in a generic manner, i just need to document it in the form of a post.
comment #webshooter.2 on 2023-02-04
Never heard of WebRTC before, very interesting. I like the idea of good-old peer to peer connections, but unfortunately, this will be very hard to get established broadly, because some ISPs still do not support IPv6 -.- You would therefore need a relay server, wouldn't you?
comment #webshooter.2 response from iio.ie
this is already well established and should work in most places even without a relay server. i've uploaded a chat demo to @/webchat in case you want to give it a quick whirl.
# webchat: a webrtc demo of a peer-to-peer chatroom
this post has non-textual or interactive elements that were snipped from this backup page. see the full content at @/webchat.html.
this demo implements a peer to peer chatroom. after joining a room, the messages are passed peer to peer with no involvement from this server. join a chatroom from two completely different browsers, computers, or networks and you should still be able to chat as long as you join the same room.
[non-text content snipped]
# webrtc
this is implemented via webrtc. if you look at the network console in your browser, you won't see requests going towards my server after the initial setup. in fact you won't see anything because at the time of writing most browsers' network consoles don't include the webrtc data.
i have to admit, i don't know much about webrtc. all i know is that it does some serious black art networking magic to establish connections. it can punch holes in nat via stun servers, use relays, etc. in this demo i use google's public stun service but avoid using a relay because i'm not sure there are free ones for that. but i couldn't come up with a setup where i actually needed a relay server, webrtc is pretty smart at establishing direct connections.
if i understand correctly, webrtc is already well supported in most browsers. it's used for videocalls, games, webtorrent, peertube, internet-of-things, and many other things. and it's relatively safe out of box thanks to its mandatory encryption.
its api is a bit complicated but roughly this is what you need:
the only tricky part is implementing the "server transports data" part of the above description. i think this is called signaling in the webrtc jargon. this part is left unspecified in webrtc so each web service has to figure this out on their own.
# signaling
the signaling part only needs to transport a few kilobytes of text data between the two parties. you could do it even via instant messages. but of course it's much nicer if computers can do this automatically.
at this point in time there probably exist some free signaling services already. but they certainly aren't easy to find and they often come with complicated apis. so i've created a very simple service on my server for my demoing needs.
there are two operations:
basically you can pass arbitrary data around:
    terminal 1: curl -X POST 'https://iio.ie/sig?get=someidentifier&timeoutms=600000'
    terminal 2: curl -X POST 'https://iio.ie/sig?set=someidentifier' -d $'hello world\n'
as soon as you run the second command, the first command returns with the data the second one uploaded.
the get operation allows for a `timeoutms` parameter. you can set it to 0 to immediately return if there is no active post operation waiting.
each text can be read out only once. if you want to share data across multiple clients, you have to keep re-uploading it.
multiple set operations for the same identifier will be queued up.
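for a non-curl example, here's a minimal go sketch of a client for this endpoint. it's just an illustration based on the urls above; the helper names are made up.

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "strings"
    )

    // sigGet waits until someone uploads data under id or until timeoutms expires.
    func sigGet(id string, timeoutms int) (string, error) {
        url := fmt.Sprintf("https://iio.ie/sig?get=%s&timeoutms=%d", id, timeoutms)
        resp, err := http.Post(url, "text/plain", nil)
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        return string(body), err
    }

    // sigSet uploads data under id for a single waiting reader.
    func sigSet(id, data string) error {
        resp, err := http.Post("https://iio.ie/sig?set="+id, "text/plain", strings.NewReader(data))
        if err != nil {
            return err
        }
        return resp.Body.Close()
    }

    func main() {
        // the same exchange as the two curl commands above, just in one process.
        go sigSet("someidentifier", "hello world\n")
        msg, err := sigGet("someidentifier", 600000)
        fmt.Println(msg, err)
    }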
# chat service
now we have everything to create a simple webrtc based chat service.
and the rtc connection establishment looks like this:
if the browser didn't find an offer then it becomes the server. this server peer now needs to keep listening for new client peers. so it does this in a loop:
there's some additional complexity in handling various edge cases, disconnects, error handling, formatting, ui management, etc. these can get quite hairy so i didn't really bother too much with the details for this particular demo. anything unexpected and the demo just freezes up. but it should at least be good enough to try things out.
for reference the hacky source code for all this is at @/webchat.ts.
# history
in @/webshooter i've talked about how i want to create a peer-to-peer multiplayer shooter game. this hopefully demonstrates how that could function with webrtc.
published on 2023-02-05
comment #webchat.1 on 2023-02-26
wow, thanks for this demo (i am the one from the other posting). i read about the "chat control" plans of the EU and am concerned, that basic rights like save communication are taken from people. client-side and server-side control could become obligatory for the major messaging platforms. a webrtc based solution could be nice, but without the use of any relay-servers.
comment #webchat.1 response from iio.ie
huh, i didn't know about those plans, thanks for mentioning them. it sounds like a pretty dumb idea. but look on the bright side: every time bureaucrats do something like this, the technology evolves, see what happened in the file/music sharing scene. as a response the more secure, less tractable, more decentralized, peer-to-peer messaging systems will get more popular. so the joke is on the bureaucrats. having such a chat control plan will be a net benefit for society due to its obvious unintended effects. :)
also note, this demo does not use a relay server. it needs a central service to help establish the connection between the two peers. once established, the peers communicate directly with each other with no involvement of any central server.
# firing: instead of firing, try moving low performing employees to different teams
if you are a big company and you have a lazy or unproductive employee, what should you do?
my ideal company structure is described at @/ranks. the rules i propose in there already deal with low performance in a natural way by allowing the ranks to go up and down automatically.
but what about a traditional hierarchical company where ranks cannot go down and each person has a manager? the usual process is to put the employee on a performance improvement plan and if that fails, fire them. but is there more that you can do?
note that in this post i will be mostly talking about knowledge workers but similar thoughts apply to others as well.
# the proposal
i'm proposing a new step between the performance improvement plan and the actual firing event. in this new step the low performing person would lose their manager.
the person then no longer belongs to a team and has about 2 months to find a new team. if during that time they can find a new supporting manager who accepts them as a report, they keep their job. and if they still don't have a manager after 2 months, they lose their job.
basically during those 2 months the employee's new job is to find a way to make themselves useful to the company. this should be enough time to decompress from the previous assignment, refresh old skills, and interview at new teams or for new roles.
# the reason
a lot of times the employee is a low performer due to the environment they are in.
these are often completely unconscious and the employee doesn't realize the underlying source of stress. in a big company different teams often have different culture. it is possible the employee would thrive in a different team. and onboarding an active employee often needs significantly less time investment than a completely new hire. the employee might bring useful institutional knowledge with them as well. so this is very useful for the new team.
# seeking teams
large companies usually have an internal job board. that's where the employees can find new transfer opportunities. the person seeking a new team could find new teams there. or if the employee is famous enough, they could just broadcast a "hey, anyone want to hire me?" message and rely on good luck.
if a team is hiring a new member, they usually only see a resume and whatever impressions the interviewers had from the person. that's all the information they have. that's what they need to make the hiring decision on. however with this process, teams can try out the new person even for a week. they can give them assignments, the person can participate in meetings, etc.
maybe you, as the manager, see in the person's file that they had low performance in their previous two teams. maybe the person just couldn't get excited about working on either ads or policy compliance systems. but maybe the person could really thrive in your low-level infrastructure team. you are not sure though. this person in limbo could become a part of your team temporarily. there are no legal shenanigans to worry about because the employee is still legally employed. your confidence about whether the person is a good fit or not should significantly increase after the trial period.
why should the company not give their employees the opportunity to thrive? why do companies throw away completely good candidates after they matched them to the wrong team?
# reality
i haven't seen such a process implemented anywhere. big companies are simply insensible. they usually reduce everything to money. a person not assigned to a task is bleeding money for them and that must be eliminated. this would be a long term investment and wall street cannot allow that.
it is very important for a company to avoid falling into this sort of thinking. but that's a thought for some other day.
published on 2023-02-25
# land: you can't own land, you can only lease it from the government
there's a thing bothering me about private property ownership. what do you do if the owner doesn't take good care of it and this causes a net drag on the environment around it? you could own an old house in the middle of the city centre and refuse to make better use of it, such as a shop or a condo. but i also like the idea that if something belongs to you, you should have the freedom to do whatever you want with it within some reasonable boundaries.
# expiring ownership
i think there's a solution to this! you never fully own a land but you simply lease it from the government.
the government would lease each piece of land for 20 years. after 20 years the owner would need to renew the lease. usually such renewals would be granted. but during such a renewal the government would have the chance to review whether the current usage still makes sense. if it doesn't, it could repossess the land and put the property to a better use.
that old house you own in the city center? the city government could take that away and have a condo built there to improve the housing market. hopefully you took this into consideration when you acquired the lease 20 years ago.
however not granting the renewal would need lots of process to ensure the government doesn't abuse its power. perhaps even public approval for not renewing could be a requirement. typical reasons for not renewing a lease could be:
# pricing
the owners of these leased properties would need to pay tax on the land. to make administration and expectations very simple, i'd require the tax to be paid daily. no need to fill out complex documents once every year. it would be a continuous process so if something is wrong, the government can notice and fix it right away. and if the owner stops paying, the government can intervene quickly rather than spending a lot of resources on trying to collect debts.
the increase of tax would be limited. but during the renewals the government can assign the land to a different zone. such a zone assignment change could cause a significant tax increase. owners would be encouraged to give the lease to someone else who can make better use of the land.
the government shouldn't gorge on this though. a bagel shop in the city center is more useful than another bank branch. if you make the tax too high, all you get is banks who can print their rent money (at least in a society that doesn't ban money-printing).
# land tax is fair
land is an exclusive resource so i think taxing its ownership is fair. but i wouldn't expect a lot of tax income from this. my current ideal system is @/autotax and in that i'd expect most tax coming from money hoarders.
one advantage of land tax is that it's one of the primary ways to legitimize government currency. in @/autotax i talk about a new currency. but what would motivate people to use it? well, if you live somewhere, then that place must pay land tax in government currency. so you need to pay your rent in government currency. which means you will need to acquire government currency. thus people trade in government currency.
btw, interesting side note. what made the crypto currencies popular? what motivated people to acquire these? well, speculation and some ease of use, sure. but also ransomware! if ransomware asks for cryptomoney then suddenly you will need to start acquiring it and thus its trading volume increases. at least this is a neat theory put forward in https://locusmag.com/2022/09/cory-doctorow-moneylike/.
# long-term use
anyway, back to leasing land. such a setup might discourage building large, complex buildings that might take 10-20 years to complete. such long building durations are very rare though. often the slowness comes from inefficient planning. the short terms would encourage leaseholders to build quickly, otherwise they might enjoy the hassle-free ownership only for a short while. and they would also discourage building expensive high-rise condos. i think that's fine. those are very brittle and antisocial anyway.
however, if needed, maybe there could be an exceptional process for 50 year long leases. but then the government would be more involved in the process. it would ensure solid plans, long lasting materials, etc.
# compromise
i don't know, maybe this is a dumb idea. but it can be really frustrating that some rich people keep inheriting vast estates and then let them rot. while poor people can't find housing anywhere and the government can't do anything because it can't touch private properties. so this would be a compromise. people could have their own place but not forever.
published on 2023-03-04, last modified on 2024-12-16
comment #land.1 on 2024-01-24
the expiring ownership system already exists in serbia and probably other ex-yugoslav countries afaik. i’m not that knowledgeable about this topic but i do know that you have to renew land ownership in a similar way to what you’ve described.
for example, the house i grew up in, its land was bought in the 60s so the ownership of the land will have to be renewed after 99 years so in like 35 years or something.
btw i accidentally stumbled upon your blog thanks to a reddit post, so i skimmed through the posts and i honestly quite relate to your thought process. ill add your rss feed to my reader so keep at it with the interesting topics :)
# commitments: absolute commitments can help remain virtuous
you create a wonderful and clear website. it gets popular. to make sure you can pay its costs, you add a clearly marked ad. 10 years later the site is full of ads and barely usable.
you are a politician who wants to change the world for the better. a lobbyist invites you to a lunch for him to better explain his point of view. it's just one lunch, that's not evil. 10 years later, lobbyists are taking you to luxury vacations, giving you luxury goods. in exchange you must vote to please them.
you are a farmer and you need a plot for your cattle. so you cut down some trees in the nearby rainforest. 10 years later you are cutting down an exponentially increasing amount of the forest. 100 years later the planet has no oxygen supply.
you are a poor university student who is looking to make some little pocket money. you partake in an experiment where you have to shock another participant if they make a mistake. you start with a very small voltage. it's small, not hurtful, you go ahead. 1 hour later you are issuing almost deadly voltages.
# step by step
in every scenario the decay happens step by step. the individual steps are usually completely justifiable in the context of the previous steps.
it's just one more ad, our a / b tests show it's no big deal for our users.
it's just a little gift, that's not that different than accepting a nice lunch.
it's just a few more trees, plenty will remain.
it's just a bit more voltage, the other person can surely take it if the previous ones were fine.
none of the small steps make the person realize that something is going awry. then even in the fully decayed state the person will not see that they are doing something wrong. in their minds they remain virtuous people.
# prevent decay with commitments
once you get into this decayed state, it's very hard to get out of it. it would create a huge cognitive dissonance, mental pain. it's easier to find justifications why continuing whatever you are doing is fine.
it's best if you can prevent the decay in the first place. and one way to do so is to make absolute, easy to verify commitments publicly. make sure breaking the commitments has repercussions. this will make it painful for you to go against them because it would result in public humiliation. and if you do go against them, it's a clear signal for the others that your values have been compromised.
commit to not show more than one clearly marked ad on your website. if you add more, then you will donate x million to a competing startup or a charity.
commit to never interact with lobbyists outside some pre-determined procedure. if you fail to do so, you will volunteer full-time in a homeless shelter for a week to make amends.
commit to never cut more than 1,000 trees per year. if you fail to do so, you will reforest the double amount somewhere else.
commit to not electrocute with more than 20 volts. if you go over, you give up the payment from the experiment.
this hedges against the failures a little bit. there is at least some consolation prize for the people interested in your promises.
# links
there are a lot of interesting sites about this topic on the internet.
published on 2023-03-18
# genai: generative ai is neat and we shouldn't limit its development
nowadays stable diffusion and ai chatbots are all the rage. i'm not sure what to think of them so thought i write a post for myself to organize my thoughts a bit.
# improvements
consumer technology seemed to slow down its improvement rate. then suddenly, within the span of a few years, all these new image and text generation tools appeared. now the question is: will such a rate of improvement continue? i see a lot of people saying (even mathematically proving) that these tools are dumb and very limited. on the other hand there are people fearing these tools who want to legally limit their research.
i'm in the camp who thinks we are at a very early stage and these tools will rapidly improve. it just takes one good idea executed well to make a large leap in quality. like what happened a few years after the "attention is all you need" paper. i think it's reasonable to expect that people will find ways to make these models even smarter with the right tweaks. or maybe someone will try to integrate this tech with brains and give themselves (or animals?) superhuman intelligence. and with that they can rapidly push the state of the art in both the model and the biological integration. there's no harm in entertaining the craziest ideas so that one is somewhat mentally prepared for the ai takeover.
as for limiting this tech: i'm in the camp who says it shouldn't be limited. yes, our society might not be prepared for all this tech. but you can't really ban this technology. it can be further developed in secret in the labs of companies or governments. and then it can attack a woefully underprepared society with all the deepfakes and all. i say it should be open and let's adjust quickly to the new reality even if it's painful.
# coding
looks like the nature of programming jobs is changing. rather than typing out full code snippets, we'll just give a rough outline of the desired algorithm in comments and ai will fill in the rest. or the ai will turn commit messages or code review comments into patches that the humans can simply apply with one click.
i remember experienced developers advocating "never comment what code does but why". 4 years ago in @/complexity i argued against that. you should write out a rough outline of your intent (alongside the why). code is just an implementation detail, the intent is what matters. looks like the ai tools are making people embrace this approach so i feel vindicated, yay.
i don't think dev jobs are going away in the next two decades. your vanilla computer user won't have enough programming experience to be able to reason with ai with enough clarity. you will still have the devs having meetings trying to figure out what system they want to create and to give status updates. perhaps the job will evolve to be more meeting and status update oriented if the coding itself becomes that efficient.
nevertheless, i don't use any autocompletion (not even in shell, see @/autocompletion) over a decade now and i'm not planning to start doing so. i simply find joy in coming up with the right variable names, the right structure, and then simply typing it all in raw. i don't care if i'm slower than the others. it's like a videogame for me. it's a dopamine hit when i spell a hard-to-spell word right the first time (similarly to headshots in videogames). would you prefer automating the act of playing a videogame? needing to remember all the exact details and spelling of functions is very healthy memory exercise for me anyway. it's more entertaining than solving sudoku puzzles for staying sharp.
# creative works
one usecase for ai that immediately pops into my mind is converting all the current 24 fps slideshows into proper 60 fps animations. i hope that ai will be able to do this in an increasingly better fashion. i still don't get why people hate 60 fps but maybe ai will be able to convince people to see the light of animation compared to slideshows that people today call movies or anime. and then apply a ncis-like enhance algorithm on the pictures and you can have 8k level quality even for old crappy youtube videos. nice!
i also expect that comic and manga drawing will become easier. i don't think writers need text generation since writing text is easy. but drawing and coloring takes time so automating that could be nice. as long as the tools are masterfully used, this could lead to more high quality content in less time. that's quite nice if one is into binge-consuming. but i feel ambivalent about this too since there's already an endless sea of high-quality content on the internet. why is it important to always consume the freshest bit?
if content is endless then maybe in future the consumption itself won't be the sole source of joy. there will be more emphasis on the act of participating in the generation, sharing, talking about those creations together. the nature of entertainment itself might change. we might go back to a more social life after all the isolation computers brought on us.
# copyright
however if creation becomes easy then copyright might be challenged more. it's already somewhat of a gray area what happens if you have an image generated by a model trained on a copyrighted dataset.
if there's one thing i wish to come out of all this disruption then it's the abolishment of these stupid intellectual monopoly laws. i already wrote about my distaste for them in @/intmon.
maybe thanks to ai there will be too much content that copyright owners won't have the resources to keep up with all the violations. or ai can slightly adjust existing content to the point at which you would have hard time proving copyright violation. e.g. use ai to generate an anime from a movie with replaced names and all. the whole copyright could become unenforceable. and maybe then it will become obsolete and the laws around them will be dropped. if ai can achieve this then i'll be happy about them.
# laws
an alternative approach to solve the new copyright issues is to make even more complicated laws. maybe the laws have to be so complex that humans can't even comprehend them. but worry not, ai comes to the rescue. our lawmakers will just generate the laws to keep up with complexity.
worrying that won't be enforceable? worry not, ai comes to the rescue. the ai will come up with random laws and then we'll use ai to enforce them. at some point people will wake up to robots knocking on their doors informing them that they broke some law no human knew before.
at least the lawyers make a fortune, yay.
# humanity's future
some people are concerned about the future because ai will take people's jobs. i'm not. i found that humanity excels at producing bureaucratic nonsense. so in the end there will be always some bureaucratic job to do for everyone. nowadays even being unemployed comes with so many requirements, forms to fill, queues to wait, it's almost a full time job. one big time sink of humanity will be reviewing the ai output because we will never fully trust it. in a way we will be slaves to the ai. but if nothing else, just ask an ai to come up with bullshit jobs and i'm sure it can deliver. i wrote more about this at @/bureaucracy.
it might be a painful transition but i think we'll be fine. it's just that we might end up living in some sad kafkaesque nightmare. i still hope that we will slowly move towards the idea in @/utopia but i wouldn't really bet on it.
# summary
in short i think ai and the disruption that comes with it is good and it's better if we just get over it soon. personally i try to avoid using ai in my everyday life. but i expect rapid growth in this area, so who knows, maybe one day i'll embrace it.
published on 2023-04-04, last modified on 2023-04-09
# myopia: humanity will be installing violet lightbulbs everywhere
this post has non-textual or interactive elements that were snipped from this backup page. see the full content at @/myopia.html.
i'm nearsighted which means i can only see perfectly up to very short distances. i have an about -5D myopia which means i can see perfectly only up to 100 cm / 5 = 20 cm. it means i have to wear glasses in order to function in the world. it's annoying but it's not the end of the world.
# measurements
i feel that every time i got glasses for 100% correction, my eyes quickly got used to them and then my eyesight worsened faster than before the new glasses. unfortunately i never really measured this. this lack of measurement annoyed me so in 2020 and 2021 i decided to measure my eyesight daily.
given that my vision hovers around 20 cm, it's very easy to do it by hand. you can do it with a measuring tape. you put some detail on it, pull it from your head until it starts becoming blurry and then note the distance:
[non-text content snipped]
initially i was quite enthusiastic and did a lot of measurements. and then i dropped to daily and then i stopped altogether. here are my measurements:
[non-text content snipped]
it's super noisy data. a lot depends on the light levels, time of day, tiredness, etc. i'm just not rigorous enough to create a stable enough environment for this. but if i squint enough, i do see a pattern which suggests that during winter my eyesight worsens more.
note to self: all the data is in @/focusreach.data. the data format is `yyyymmdd-hhmm central left right comment`. and the script to convert the data to the graph is at @/focusreach.plot.
another personal note, here are my official measurements done by professionals since i started tracking (i'll keep this updated whenever i visit an optician):
# endmyopia
as far as i'm aware, the scientific consensus is that myopia is not reversible through natural means. then there's the endmyopia community which claims otherwise.
during my measuring time i followed this endmyopia community. their method is roughly described in their wiki at https://wiki.endmyopia.org/wiki/Guide:Start_your_improvement_here. when doing close-up work, use separate glasses that correct for that distance. and be outside a lot, do a lot of "active focusing", which is a special exercise.
the forum does have some people for whom the method worked. but it requires a lot of constant work. and from what i can read in the posts, it's very easy to regress. you stop the active time investment and you lose your progress. i recall that even jake, the community's founder, reported regression at some point when he stopped the exercises for a longer while.
i'm totally not cut out for investing the time into this and then maintaining that time investment. i've thought a lot about this but gave up even before i started.
# surgery
i did consider surgery too. lots of people went through it and they are happy with the results. but i also know some folks for whom the eyesight regression continued even after the surgery. one person was about 40 years old with -9-ish eyesight; she got surgery and after a few years she had to wear glasses again.
given that i noticed the regressions after each pair of stronger glasses, i imagine i would fall into the same category. so a surgery would be a temporary relief. and it's a surgery which comes with non-trivial risks. a lot of people report overly dry eyes and double vision after such surgeries. i already experience dry eyes sometimes, it could become even worse. just writing about it makes them dry again, ugh.
another aspect i fear losing is the good closeup accuracy. i spend a lot of time freewriting (see @/inspiration). that means a lot of closeup work that i do without wearing any glasses. currently this is a very convenient and comfortable activity. but i fear that after correcting my eyes, this might become more tiresome or i might start needing reading glasses for this much sooner than without the surgery. i also like having the perfect closeup vision during intimate times too.
given that i can live with glasses and i like the extra closeup accuracy, for now i decided to avoid any surgeries.
# alternatives
i could wear contacts too but i'm too squeamish about them (or anything that comes near my eyes). i guess i could get used to them after a while but dunno, i like the simplicity of glasses.
and i'm aware of a bunch of alternatives to the lasik operation: lasek, lens implants, epi-lasik, lens exchange, etc. dunno, they feel similarly scary to me as lasik, if not more.
# acceptance
most of the time i don't even notice the glasses are on me. however they get annoying in some situations:
the situations are quite limited. and they are quite rare events for me given that i just live at home in front of the computer anyway. so i can just accept the tradeoffs here and move on. which i'm doing for now.
# glasses
another annoying thing about glasses is that they are super expensive for some stupid reason that the internet can explain better than me. your insurance might cover some of the costs though. if that's not possible, there are dozens of websites where you can order from for relatively cheap. you can probably order 3-5 of them for the cost of a single pair in the local shops. surely at least one of them will be a good fit.
# kids
i skimmed quite a few articles about myopia progression. the recent studies suggest that screen time doesn't really correlate with the progression. all that matters is that kids are outside long enough every day.
so if you are looking for a rule of thumb for screen management then i'd say this: only allow screen time after some adequate outdoor time. the amount of screen time probably doesn't matter too much from the eyesight perspective. (the allowed content on the screen is a completely different topic though.)
i try to strive for this with my kid so that at least the problem is less severe for the next generation. but boy, managing kids is super hard. and if he ends up needing glasses, there are some non-invasive treatments that claim to slow the progression. i haven't done much reading on this yet, but i might look into using those high tech dims lenses for his glasses. i hope there will be even more reliable advice available by the time he gets to this phase. anyway, we'll see how things go.
# my prediction
here's one relatively recent attempt at explaining the mechanics of myopia that i found (from 2016): "Violet Light Exposure Can Be a Preventive Strategy Against Myopia Progression", https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5233810/. the only downside is that it still talks about chickens, not humans.
there's one interesting implication i'm making from the title alone: all those contact lenses and glasses that filter out all the "harmful rays" from the sunlight? they might be making our myopia worse. the article actually confirms this, see section 3.6.
i predict one day we will be replacing our lightbulbs with violet light emitting ones and they will be always on inside. once that's done, the myopia problems might subside a little bit for the next generation. at least until our grandchildren forget why those lights are there in the first place, remove them for efficiency, and then the myopia epidemic starts again.
that, or after the ai singularity we can edit myopia out of our genes, assuming the machines allow us to continue living and breeding.
# links
for my own reference, here are some random links i've collected over time. maybe one day i'll get to read them:
published on 2023-04-15
# stringids: map strings to a numbers and use those in string heavy apps
suppose you are working on a compiler. you might be doing a lot of short string operations. each variable has a string identifier after all. if you keep looking up variables via strings, you will have the following problems:
if these strings don't grow unbounded, you could simply map each string to a unique number and use those in your data structures. once you work with integers, all the above issues go away:
now the question is, how do you implement mapping strings to integers and back? in this post i'll explore implementing this in go. in @/dynmem i mentioned pointers should be avoided in performant code so i'll focus on a pointerless solution.
# interface
let's compare several implementations of the problem. for simplicity let's go with this interface:
    type ID int64

    // Mapper is an interface for mapping strings to integers and vice versa.
    // As a convenience, the empty string is always mapped to 0 and vice versa.
    type Mapper interface {
        // Map maps a string to an ID.
        // if s was seen previously, the previously allocated ID will be returned.
        // otherwise a new ID is allocated.
        Map(s string) ID

        // Unmap returns the string that represents ID.
        // it is a panic-level error to provide an ID not returned from Map.
        Unmap(id ID) string
    }
# motivational example: acl system
linux users and groups is an example where a similar idea is used. each user is assigned a number in /etc/passwd and each group is assigned a number in /etc/group. then functions like geteuid() or stat() only return those id numbers rather than strings.
but let's look at how you could implement an optimized acl system with the above interface. you might start out like this:
    type SlowACL struct {
        members []string
    }

    type SlowACLs struct {
        acls map[string]SlowACL
    }

    func (acls *SlowACLs) IsMember(acl, member string) bool {
        for _, m := range acls.acls[acl].members {
            if m == member {
                return true
            }
        }
        return false
    }
if you have millions of users, this can get pretty rough. now the gc must visit millions of nodes every gc cycle. and the member lookup needs to do a lot of cache unfriendly string comparisons.
now compare that solution with a stringid based one:
    type ACL struct {
        members []stringid.ID
    }

    type ACLs struct {
        acls     map[stringid.ID]ACL
        idmapper stringid.HashMapper
    }

    func (a *ACLs) IsMember(acl, member string) bool {
        query := a.idmapper.Map(member)
        for _, id := range a.acls[a.idmapper.Map(acl)].members {
            if id == query {
                return true
            }
        }
        return false
    }
assuming the idmapper has a constant gc load, this solution creates much less load on the gc. there's only one pointer per acl group. there are tricks to avoid even that but that's outside the scope of this post.
and also note the query loop. it's just a cache friendly linear scan. it's pretty performant compared to the previous solution.
# testing
i'll implement 3 variations of mappers. to doublecheck my implementations, i'd run them through this function:
    func test() error {
        for _, m := range []Mapper{&SimpleMapper{}, &IndexMapper{}, &HashMapper{}} {
            fmt.Println("testing a mapper:")
            for _, w := range []string{
                "apple", "pear", "orange", "orange", "apple",
                "peach", "each", "orange", "kiwi", "apple",
            } {
                id := m.Map(w)
                fmt.Printf(" %8s %8s %16d\n", w, m.Unmap(id), id)
            }
        }
        return nil
    }

    func main() {
        if err := test(); err != nil {
            log.Fatal(err)
        }
    }
and then i'd spotcheck the output to confirm it makes sense. i could create proper testcases too but meh, this should be enough for the demo purposes.
# simple implementation
the simple implementation could be done via maps:
    type SimpleMapper struct {
        idmap  map[ID]string
        strmap map[string]ID
    }

    func (m *SimpleMapper) Map(s string) ID {
        // initialize if needed.
        if m.idmap == nil {
            m.idmap, m.strmap = make(map[ID]string), make(map[string]ID)
            m.idmap[0], m.strmap[""] = "", 0
        }
        if id, ok := m.strmap[s]; ok {
            return id
        }
        id := ID(len(m.strmap))
        m.strmap[s], m.idmap[id] = id, s
        return id
    }

    func (m *SimpleMapper) Unmap(id ID) string {
        return m.idmap[id]
    }
and the test output looks as expected:
       apple    apple            1
        pear     pear            2
      orange   orange            3
      orange   orange            3
       apple    apple            1
       peach    peach            4
        each     each            5
      orange   orange            3
        kiwi     kiwi            6
       apple    apple            1
straightforward and fast but it comes with two problems.
so let's explore different alternatives.
# all strings in one buffer
what if we had all the strings concatenated into one long string and the IDs were just offsets? with such an implementation there would be zero fragmentation. and there would be no memory pressure either because there would be only one pointer to store: the long string.
the problem is that every time we append a string to the long string, it might need a reallocation. but if you double the buffer every time you run out of capacity, the running time will be still amortized O(n). so this string appending shouldn't be a problem.
strings are immutable in go but fortunately strings.Builder is willing to hack around this limit. with the Builder struct we can have efficient string appending.
# IDs are offsets
we had this definition of the ID:
type ID int64
for simplicity, let's assume the total length of all the strings is less than a gigabyte so it fits into a 31 bit integer. then the lower 32 bits of the ID is the start offset, the upper 32 bits are the end offset.
an Unmap operation could look like this:
    type IndexMapper struct {
        s strings.Builder
    }

    func (m *IndexMapper) Unmap(id ID) string {
        if id == 0 {
            return ""
        }
        start, end := int(id&0xffffffff), int(id>>32)
        return m.s.String()[start:end]
    }
# index mapping
suppose all you have is that long string buffer and you want to map a string to an ID. there are two cases:
implementation-wise this would look like this:
    func (m *IndexMapper) Map(s string) ID {
        // return from buffer if possible.
        if start := strings.Index(m.s.String(), s); start != -1 {
            return ID(int64(start) | int64(start+len(s))<<32)
        }

        // append string to the buffer.
        if m.s.Len()+len(s) > 1e9 {
            panic("stringid.IndexMapper grew too big")
        }
        m.s.WriteString(s)

        // must re-search here to guarantee the first entry is returned.
        start := strings.Index(m.s.String(), s)
        id := ID(int64(start) | int64(start+len(s))<<32)
        return id
    }
and the test confirms this works:
       apple    apple    500000000
        pear     pear    900000005
      orange   orange    f00000009
      orange   orange    f00000009
       apple    apple    500000000
       peach    peach   140000000f
        each     each   1400000010
      orange   orange    f00000009
        kiwi     kiwi   1800000014
       apple    apple    500000000
the only problem with this approach is that mapping gets very slow as the buffer grows.
# hashing IDs
to address this, let's introduce a hashmap for finding strings quickly:
    type HashMapper struct {
        len, cap uint64
        htab     []ID
        s        strings.Builder
    }
we must implement our own hashmap because map[string]ID would create gc pressure.
a Map operation would consist of these operations:
for simplicity let's keep our hashtable power of two sized. we need a string hasher:
    func strhash(s string) uint64 {
        var h uint64 = 5381
        for i := 0; i < len(s); i++ {
            h = 33*h + uint64(s[i])
        }
        return h
    }
and the above steps could be implemented like this:
    func (m *HashMapper) Map(s string) ID {
        if s == "" {
            return 0
        }

        // initialize if needed.
        if m.s.Len() == 0 {
            m.cap, m.htab = 4, make([]ID, 4)
        }

        // find existing entry in the hashtable.
        mask := m.cap - 1
        h := strhash(s)
        slot := h & mask
        for m.htab[slot] != 0 {
            if id := m.htab[slot]; s == m.Unmap(id) {
                return id
            }
            slot = (slot + 1) & mask
        }

        // append to s and add the resulting ID to the hashtable.
        start, end := m.s.Len(), m.s.Len()+len(s)
        if end > 1e9 {
            panic("stringid.HashMapper grew too big")
        }
        m.s.WriteString(s)
        id := ID(int64(start) | int64(end)<<32)
        m.htab[slot] = id
        m.len++

        // resize the hashtable if needed.
        if m.len >= m.cap/2 {
            newcap := m.cap * 4
            mask = newcap - 1
            newhtab := make([]ID, newcap)
            for i := uint64(0); i < m.cap; i++ {
                id := m.htab[i]
                if id == 0 {
                    continue
                }
                h := strhash(m.Unmap(id)) & mask
                for j := h; true; j = (j + 1) & mask {
                    if newhtab[j] == 0 {
                        newhtab[j] = id
                        break
                    }
                }
            }
            m.cap, m.htab = newcap, newhtab
        }
        return id
    }
the Unmap function would remain the same as in the IndexMapper. and the test function confirms that this works too:
       apple    apple    500000000
        pear     pear    900000005
      orange   orange    f00000009
      orange   orange    f00000009
       apple    apple    500000000
       peach    peach   140000000f
        each     each   1400000010
      orange   orange    f00000009
        kiwi     kiwi   1800000014
       apple    apple    500000000
there are a couple of downsides to this approach.
but even if the mapper is a bit inefficient, it's hopefully offset by the more efficient data structures this allows having.
# caveats
this sort of optimization can come at the cost of code readability. the readers now have to understand the stringid abstraction over simple strings. only do such optimizations if deemed necessary.
another potential optimization avenue would be to add "Lookup(s string) ID" function. it would return -1 if the mapper doesn't contain the requested string. basically it's the same as the Map() function but without the "add to the hashmap" logic. the example acl system could use this instead of Map(member). non-existent users would then not grow the mapper's data. this can be important if you don't fully control your callers.
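as a sketch, such a Lookup for the HashMapper above could look like this (just an illustration of the idea, not part of the code above):

    // Lookup returns the ID of a previously mapped string or -1 if s is unknown.
    // unlike Map, it never grows the mapper's data.
    func (m *HashMapper) Lookup(s string) ID {
        if s == "" {
            return 0
        }
        if m.cap == 0 {
            // nothing was mapped yet.
            return -1
        }
        mask := m.cap - 1
        for slot := strhash(s) & mask; m.htab[slot] != 0; slot = (slot + 1) & mask {
            if id := m.htab[slot]; s == m.Unmap(id) {
                return id
            }
        }
        return -1
    }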
i have to admit i never used this structure before because i never worked on a large go program where this would have actually mattered. but it's a nice example of how one could think about removing pointers from their go code. maybe one day i'll have a chance to use this and then i can report back on its usefulness.
# edit on 2023-08-12
btw, i learned that java had a similar idea about optimizing strings too. here are some docs describing the feature:
# edit on 2024-02-04
i just noticed that go is getting a new "unique" package: https://github.com/golang/go/issues/62483. the newly proposed handles can make the string comparisons fast. but they still contain a pointer so the gc pressure remains high. my points in this post still stand.
# edit on 2024-05-09
i came across a cool blog post better explaining the pointer problem: https://blog.gopheracademy.com/advent-2018/avoid-gc-overhead-large-heaps/. the code to demonstrate the problem is pretty short:
    func main() {
        a := make([]*int, 1e9)
        for i := 0; i < 10; i++ {
            start := time.Now()
            runtime.GC()
            fmt.Printf("GC took %s\n", time.Since(start))
        }
        runtime.KeepAlive(a)
    }
now change the `*int` to `int`. on my machine the gc cycle speeds up from 300ms to 0.2ms. that's because the gc has a billion fewer pointers to traverse in the latter version.
and then at the end of the article it links to some go libraries implementing pretty much what i was arguing for here:
i haven't looked too deeply but they seem pretty cool! i recommend checking them if you need something like this.
published on 2023-05-06, last modified on 2024-05-09
# msgauth: let users log in with an email or a whatsapp message
this post has non-textual or interactive elements that were snipped from this backup page. see the full content at @/msgauth.html.
i'm fascinated with the qr code based login where you don't have to type in anything at all except point your phone at the screen. i was thinking how could i implement such a thing myself. maybe you could just send the server the shown auth code via, say, email. this could be used for a hypothetical authentication flow on a login page. here's a demo for this:
[non-text content snipped]
tip: the native camera app recognizes qr codes automatically on modern android phones, no setup needed other than pointing it at the code. the qr code will take you to your email app with all the details prefilled, you just need to press send to authenticate. do that and the qr code should be replaced with your email address. the email is not stored in any form, it goes straight to /dev/null so feel free to send it. unfortunately it might take a few seconds until my server sees the mail.
note: the undo send feature in gmail can delay email sending for quite a while.
# idea
i have seen such qr code login pages in an increasing number of places. i think steam was where i saw this for the first time. you go to https://steampowered.com/login, scan the qr code with the steam app where you are already logged in, and it logs you in on that machine. https://web.whatsapp.com is another site with a login like this.
in my simplified demo the idea is this:
i don't think this is any less secure than username + password where you have the ability to do email password reset. it simplifies account management on the backend side as well. there's no need to deal with password or passkey storage anymore.
i suppose openid / oauth could solve this problem too but i haven't seen many providers allowing this form of login at the time of writing. and integrating with oauth is somewhat painful compared to the above very dumb protocol.
# implementation
here's how one could implement this:
to keep things simple i make some shortcuts in this demo though (such as generating the auth code on the client side).
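for illustration, here's a rough go sketch of the server side of such a flow. the handler names and the shape of the incoming-mail request are made up; as mentioned, the real demo takes shortcuts such as generating the auth code on the client side. it assumes the crypto/rand, encoding/hex, fmt, net/http, strings, sync, and time imports.

    var (
        mu      sync.Mutex
        pending = map[string]chan string{} // auth code -> channel delivering the sender's address.
    )

    // loginHandler creates an auth code, shows it as a qr code,
    // then waits until a mail containing that code arrives.
    func loginHandler(w http.ResponseWriter, r *http.Request) {
        buf := make([]byte, 8)
        if _, err := rand.Read(buf); err != nil {
            http.Error(w, "internal error", http.StatusInternalServerError)
            return
        }
        code := hex.EncodeToString(buf)
        ch := make(chan string, 1)
        mu.Lock()
        pending[code] = ch
        mu.Unlock()

        // ... render a qr code containing a mailto: link with code in it ...

        select {
        case email := <-ch:
            fmt.Fprintf(w, "authenticated as %s", email)
        case <-time.After(2 * time.Minute):
            mu.Lock()
            delete(pending, code)
            mu.Unlock()
            http.Error(w, "timed out, reload to retry", http.StatusRequestTimeout)
        }
    }

    // mailHandler is where the email forwarding service (e.g. cloudflare email workers)
    // posts the incoming mail's sender and the auth code it contained.
    func mailHandler(w http.ResponseWriter, r *http.Request) {
        from, code := r.FormValue("from"), strings.TrimSpace(r.FormValue("code"))
        mu.Lock()
        ch, ok := pending[code]
        delete(pending, code)
        mu.Unlock()
        if ok {
            ch <- from
        }
    }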
if you want to raise the registration barrier even higher, you could authenticate with phone numbers instead of emails. those are much harder to generate for spam (not impossible but harder). but rather than dealing with telco stuff myself, i'd use something like whatsapp. it's free, relatively secure and has a megacorp behind it keeping it up. and it's faster than emails.
for whatsapp you can encode a url like https://wa.me/1555555?text=theshortid into a qr code. if you scan that (or even just click it), it opens up the app with the code prefilled, pointed at the right contact. the user just has to press send. you can ask the megacorp to forward the messages to your server via http post requests.
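as a small sketch, building that link for a given code is straightforward with go's standard library (the phone number is a placeholder):

    // waLink returns the url to encode into the qr code: opening it prefills
    // the auth code in a whatsapp message to the given number.
    func waLink(phone, code string) string {
        return "https://wa.me/" + phone + "?text=" + url.QueryEscape(code)
    }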
# alternative usecases
logging in is not the only usecase. you could use this wherever you want to have some email address on hand. suppose you allow anonymous commenting on your blog and want people to leave their email address. they might leave fake email addresses or have typos in them. instead require them to authenticate via the above described email sending protocol. it's much harder to get it wrong.
# caveats
in general passwords are prone to man in the middle attacks. you might make a typo in the website address and get an identical website by a scammer and you won't notice it. password managers and passkeys prevent this attack or at least increase the chance of you noticing it because there will be no saved password for the scam-site.
this message based auth is also prone to man in the middle attacks. it's a bit worse: you don't have password managers in this case so there wouldn't be any protection against man in the middle attacks. but that is a feature in case you are on a remote machine which you don't really want to connect to your password manager (e.g. you are in an internet cafe).
if you have a short domain name then typo-squatting is much harder anyway. in the end this technique is a tradeoff between convenience and security. i'd advise it only for low risk accounts such as online forums, streaming services, etc.
# edit 2023-09-06
previously i had a handcrafted smtp server implementation. now i'm using cloudflare's email workers: https://developers.cloudflare.com/email-routing/email-workers/. things should be more reliable now.
# edit 2023-10-19
today i learned that passkeys do support qr code based auth to sign in on new computers. i don't know much about it though, maybe i'll look into it one day.
published on 2023-05-22, last modified on 2023-10-19
# shapeware: reward free software supporters with a say in the roadmap
what would be my motivation for donating to an organization developing something free whose product i can just freely grab? it would be nice if i'd get something extra in return. i'm thinking of various non-profit organizations behind linux, firefox, signal. here's all i want: let me have a say in the roadmap, allow me to shape the software's development a bit. let me vote on features and bugs and in exchange the organization will address the top voted issues. e.g. maybe in firefox i could vote for native vertical tab support or native adblocking support.
having the short-term roadmap public helps a lot with transparency as well. and i would be more inclined to donate to transparent organizations. i think it's also fine if the roadmap is not fully public but only available for the subscribers.
this is already happening for some software. if you donate 10 euros to the free text editor vim, you can vote on the features its author should prioritize: https://www.vim.org/sponsor/vote_results.php. though it's not clear how effective this is in vim's case.
i'm not fully sure what is the right model for transforming money into votes. i want to avoid the confusion that i have with vim for instance. after thinking of various approaches, i think this has the best properties:
i am not saying that this is the only way an organization should select its priorities. if a rich donor comes and wants a specific feature (e.g. set the default search engine to a specific site) then the organization can do that without the donor having to become a subscriber. i don't even expect that donations as described above would amount to a significant share of revenue for a big organization.
but what i'm saying is that such a software organization should always address the most wanted features too. in the end a software organization needs its users. it would be shooting itself in the foot if it ignored their voice.
i wish even megacorps who don't really need donations had such a feedback mechanism for users interested in shaping the company. a rich megacorp could then send all such donations to charity to decrease people's moral friction with donating to rich companies. maybe i would feel less antagonistic towards them if they provided me with this feedback mechanism.
published on 2023-05-28
comment #shapeware.1 on 2023-05-29
What was the trigger for writing this post?
comment #shapeware.1 response from iio.ie
a few months ago i installed the signal messenger app and i was looking for a web client for it. i have not found one nor any indication of whether it's coming or never coming. that made me feel uncertain about the app. then i've seen that in the smartphone app you can set up monthly donations and all you get in return is just a badge. that rubbed me the wrong way. since then i was ruminating on how this could be improved. this is the best i could come up with so far.
# scaling: scale lookups, mutations can remain single-homed
out of curiosity i did a barebones loadtest on this server (@/scaling.go). it can handle a whopping ~4 qps. that's about ~250ms per request. if i open the network tab in the browser, i can see ~250ms is spent on the tls exchange. if 100 users tried loading the page simultaneously, some of them might need to wait almost half a minute.
i think there are 3 reasons for this:
# simple solutions
switching to a modern cpu with hardware accelerated crypto primitives would probably eliminate the cost of tls. upgrading to rpi4 would give me at least 10x improvement (the tls setup seems to be around ~100ms + it has 4 cores). or i could switch off encryption. but nah, that's out of question. i have read horror stories of some free wifis injecting ads and trackers into unencrypted http. i don't want such a thing to ever happen to the readers of this blog.
if i disable https, it can sustain about 60 qps. out of curiosity, i took out my old server that i had before @/redesign. that was as barebones as it gets: accept, read the first line, respond with a canned response. that could sustain about 130 qps on this device. i guess that's the point where the network stack's overhead becomes the bottleneck.
note that these measurements are with keepalive disabled to simulate separate users. if i enable keepalives the request rate significantly increases because connection establishment delays are gone. so i could solve this with a reverse proxy that does keepalive across the requests such as cloudflare.
anyway, given that a hardware upgrade or a reverse proxy would solve my performance issues, i'm not too worried that my secret blog can't serve multiple users simultaneously. but if it bothered me and the site were genuinely slow, how would i scale an interactive website up? this question interests me because i have seen many other simple crud sites crawl to death under load. i have seen such failures with university course management software, hr systems, programming contest sites, etc. what can one do to avoid such a flaw in the first place for simple crud applications?
# splitting
i'd start with categorizing each request into either a lookup or mutation request. opening any blog post on this site is a lookup operation because that doesn't alter anything on the site. posting a comment is a mutation request because it changes the site.
a simple design (what i currently do with this site) is to put everything into a single server. but if that single server gets overloaded, then requests start piling up, memory starts accumulating, etc. it's game over for the service.
a key insight is that lookups and mutations are inherently different operations. lookups are easy to do in parallel compared to mutations. lookups are much more frequent and people want them to be fast. mutations (such as posting a comment here) occur less often and people are more tolerant if such operations are a bit slow.
another assumption i'm making is that the data the service is operating on fits into memory. there are only a handful of services where all the data doesn't fit into a dozen gigabytes. even if it exceeds, often only the metadata needs active management which then fits. the rest can be treated as blobs and managed separately using simpler services.
with that in mind, my generic advice is this:
# timestamping
how to achieve consistency? if a user posts a comment and then immediately reloads the page, how to ensure the comment appears even if the refresh request went to a different lookup server?
each mutation response contains a timestamp: nanoseconds since the epoch. these act as sequence numbers: each mutation is associated with a unique, monotonically increasing number, and precise timestamps are an easy way to generate those.
in the lookup server's response to the client, the server assigns this timestamp to a cookie.
when the mutation server distributes its changes, it also distributes the timestamp associated with each request. this way the lookup servers know how "fresh" their internal data structures are.
the next time a client makes a request, the lookup server sees a timestamp in the cookie. if its current freshness is older, then it doesn't immediately respond to the request. it waits until its data structures update in the background and once they are updated, the lookup server finishes the request. this way a user will never see stale data in the browser from which they made an update. they just need to wait a little longer after a mutating operation.
in fact, the lookup servers set this timestamp cookie in ordinary lookup requests too. this way the user will never see time going backwards even if their lookup requests keep landing on different lookup servers.
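here's a rough go sketch of that wait logic (illustrative only, not this site's actual code; the cookie name and the simple polling loop are my own assumptions):

package main

import (
  "net/http"
  "strconv"
  "sync/atomic"
  "time"
)

// freshness holds the timestamp of the last mutation applied to the local
// replica, in nanoseconds since the epoch. a replication goroutine would
// update it; here it's just a placeholder.
var freshness atomic.Int64

func handleLookup(w http.ResponseWriter, r *http.Request) {
  // the cookie records the last mutation timestamp this client has seen.
  var want int64
  if c, err := r.Cookie("seen"); err == nil {
    want, _ = strconv.ParseInt(c.Value, 10, 64)
  }
  // wait until the local replica has caught up to that timestamp.
  for freshness.Load() < want {
    time.Sleep(10 * time.Millisecond)
  }
  // refresh the cookie so time never goes backwards for this client even if
  // the next request lands on a different lookup server.
  http.SetCookie(w, &http.Cookie{Name: "seen", Value: strconv.FormatInt(freshness.Load(), 10)})
  w.Write([]byte("response generated from the local data structures\n"))
}

func main() {
  http.HandleFunc("/", handleLookup)
  http.ListenAndServe(":8080", nil)
}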
# updates
updating the data structures is quite straightforward with @/actionlog. the lookup servers just send new log entries to the mutation server. if the mutation server accepts those, it just needs to redistribute them to the rest of the lookup servers which then apply them to their own data structures.
with sql databases this might be all moot. they might already have such distributed features out of the box and you don't need to bother with all this complexity at all. but even if it's not the case, this can still be done relatively easily. the mutation server would talk to the production database. each lookup server would contain their replica in a local sqlite database. the mutation server just needs to distribute the relevant insert, update and delete statements.
i'd recommend keeping the mutation server single threaded rather than trying to deal with locking or atomic data structures. it's very easy to mess it up and lead the lookup servers into an inconsistent state. the computers are fast enough that single threaded mutation is probably enough if they are truly trivial updates as suggested above. if not, it's probably better to shard the data and mutate each shard in its own single thread.
i'd also recommend keeping the lookup servers single threaded for simplicity. but that can be somewhat cumbersome in a language like go whose http stack insists on giving each request its own goroutine. you can try using atomic operations to update the data structures if it makes sense. try to avoid read-write locking though. those locks are very complex so they are meant mostly for long operations, not for latency sensitive ones. use ordinary mutexes with short critical sections if locking is desired.
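to illustrate the single threaded mutation loop, here's a minimal go sketch (illustrative; the entry type and the channels are my own assumptions, not the actual @/actionlog types):

package main

import "fmt"

// entry is one mutation, e.g. a serialized "add comment" action.
type entry struct {
  timestamp int64 // nanoseconds since the epoch, doubles as the sequence number.
  data      string
}

type mutationServer struct {
  requests  chan entry   // mutations forwarded by the lookup servers.
  followers []chan entry // one channel per lookup server replica.
}

// loop runs in a single goroutine so applying and distributing needs no locks.
func (s *mutationServer) loop() {
  for e := range s.requests {
    // 1. validate and apply the entry to the local data structures
    //    (or append it to the action log).
    fmt.Println("applied mutation", e.timestamp)
    // 2. redistribute it to the lookup servers so they can update
    //    their own in-memory state.
    for _, f := range s.followers {
      f <- e
    }
  }
}

func main() {
  s := &mutationServer{requests: make(chan entry, 16)}
  s.requests <- entry{timestamp: 1234, data: "example: add a comment"}
  close(s.requests)
  s.loop() // normally this would run for the server's lifetime.
}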
# election
the mutation server and lookup server can be pretty much the same code apart from one flag. that's pretty much the standard leader/follower design pattern. the mutation server is the leader, the lookup servers are the followers. you can start up a bunch of lookup servers and simply make one of them the mutation server.
you can have one static mutation server. but if it takes a long time to start the server because of all the data loading then restarting it means no mutations for a while. in that case try implementing hot-switching. make it possible to convert a mutation server into a lookup server instantly while some other lookup server becomes the mutation server.
then you need some leader election method. there are some quite complex methods for this but i think this could be simple enough:
# summary
those ideas would apply like this to this blog:
it's some work but once this is done, this would scale quite well. if for some reason i'd need to scale comment posting too, i'd solve that with sharding. e.g. have 16 comment acceptor servers. each post would be assigned to a shard based on the postname's hash. this should help against one hot post slowing down everything. and if the commenting is slow on that one hot post, maybe that's not so bad, people should cool down a bit.
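a tiny go sketch of that shard assignment (illustrative; fnv is just one reasonable hash choice):

package main

import (
  "fmt"
  "hash/fnv"
)

// shard maps a post name to one of the 16 comment acceptor servers.
func shard(postname string) int {
  h := fnv.New32a()
  h.Write([]byte(postname))
  return int(h.Sum32() % 16)
}

func main() {
  // comments for the same post always land on the same shard.
  fmt.Println(shard("scaling"), shard("colors"))
}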
aaaanyway, i babbled enough. most of this is probably common sense and has ready solutions on the internet. but i really needed to get this out so that i stop being bothered about this site being slow. as long as i have plans, i can sleep well, even if i won't implement them. :)
# edit on 2023-08-12
btw, i learned that https://fly.io/docs/litefs/proxy/ implements the above as a service. you set up a litefs backed sqlite database. it uses https://fly.io/docs/reference/dynamic-request-routing/ to have all non-GET requests go to the primary node while GET requests can go to any node. the proxy ensures that requests are never served from stale nodes. on each request the proxy checks that the picked replica does not have a lower replication timestamp, otherwise it waits for it to catch up. and with sqlite the db lookups remain local so they are pretty fast compared to traditional databases. pretty neat!
but caveat, hackernews is pretty unhappy with fly.io's reliability. but they are now in a major infra revamping that will solve all their issues so maybe it's all good now. oh, and they provide 500 free credits each month. i think that's a pretty cool way to do a free tier for a cloud service.
# edit on 2023-09-07
i'm no longer using a raspberry pi for my server. it's now on cloud, see @/rebrand and @/cloud. i no longer see performance issues.
# edit on 2024-09-13
one annoying thing with the sqlite based approach described above is that you need to manage it. either you use machines with disks or keep the database in memory and have some other means to bootstrap in case all your servers go down. oh, and you need to manage backups yourself too.
but i learned that cloudflare has a fully managed sql database at https://developers.cloudflare.com/d1/ and it seems pretty nice. it has a decent free tier and has time travel-like backup. i'll definitely consider using this one way or another if i were to build an online service.
published on 2023-06-03, last modified on 2024-09-13
# colors: color websites according to the browser-reported preference
this post has non-textual or interactive elements that were snipped from this backup page. see the full content at @/colors.html.
[non-text content snipped]
[non-text content snipped]
i wish more websites would support honoring the browser's dark mode preference. it's just adding this meta tag
<meta name=color-scheme content='light dark'>
or just this css
<style>:root { color-scheme: light dark }</style>
into the html head element. it needs to be there to avoid the "white flash" while the full css is loading in dark mode. switch the dark mode back and forth on your smartphone and then you'll see the background color on this site flipping immediately. no additional magic or scripts were necessary to achieve this effect. you can set this up on desktops too if you figure out how to switch the theme. https://wiki.archlinux.org/title/Dark_mode_switching is a good reference for linux. or simply install a basic dark theme switcher browser extension such as https://addons.mozilla.org/en-US/firefox/addon/toggle-dark-mode.
here's a control with which you can switch between the themes in this post for demo purposes:
[non-text content snipped]
but in general website authors shouldn't be implementing such a toggle themselves. they should rely on the browser or the operating system having the right controls as described above.
# semantic colors
the above approach works as long as you don't get fancy and don't use colors. if you start using colors you now have to make sure they work well in both themes. but there's a way to keep things manageable. rather than hardcoding colors to specific elements, use semantic colors. don't go overboard with semantics, don't create more than 8. here's an example i'd go with:
[non-text content snipped]
[non-text content snipped]
(click on a point to remove the coloring in case it's hard to read.)
they are all simple combinations of red/green/blue. all you need to do is find the right shade in css. e.g. notice is yellow, which is a combination of red and green (#ff0). now it just needs a light tweak to find a comfortable shade to match the color scheme. for a yellow background in the light scheme i've picked #ffc and for dark i've picked #660. easy peasy.
# avoid combinations
each semantic has a variation for both foreground and background. background is for setting the background color of bigger elements in the ui, while foreground is meant to highlight specific elements in the ui.
[non-text content snipped]
but most importantly: avoid combining the two. if you set the background then don't set the foreground and vice versa. otherwise you get a combinatorial explosion and it would be very hard to confirm that all combinations work well:
[non-text content snipped]
suppose you are trying to color a diff and within two lines you are trying to do a worddiff. avoid doing additional styling on top of an existing semantic. switch to a different semantic instead. here's an example where the in-line diff is highlighted via the notice semantic:
[non-text content snipped]
# implementation
it's pretty easy to set this up in css. by default web is "light" so you write your normal rules for that. then you add a special section for the case when the user prefers dark mode. i recommend using css variables for this:
:root {
  --bg-neutral: #ddd;
  --bg-notice: #ffc;
  ...
  --fg-neutral: #bbb;
  --fg-notice: #880;
  ...
}
@media (prefers-color-scheme:dark) {
  :root {
    --bg-neutral: #444;
    --bg-notice: #440;
    ...
    --fg-neutral: #666;
    --fg-notice: #ffc;
    ...
  }
}
code {
  color: var(--fg-reference);
}
now the `code` elements will have the reference semantic which is blue in practice. you can look at this site's @/style.css for a complete example.
having said that, if you look at this post's css at @/colors.css, it's a bit different:
[non-text content snipped]
notice the highlighted line. that's how it implements theme switching (based on https://stackoverflow.com/a/56550819). it comes with this js function that needs to be called on startup and on theme switch:
let darkPreference = matchMedia("(prefers-color-scheme:dark)")
function setTheme() {
  if (htdark.checked || (htsystem.checked && darkPreference.matches)) {
    document.documentElement.style.colorScheme = "dark"
    document.documentElement.setAttribute("data-theme", "dark")
  } else {
    document.documentElement.style.colorScheme = "light"
    document.documentElement.setAttribute("data-theme", "light")
  }
}
...
function main() {
  darkPreference.addEventListener("change", setTheme)
  setTheme()
}
when the setting is at system and the user switches the system theme then the js function needs to re-run. it's possible to auto-switch without js too even if you have customizable themes but i found this much shorter and cleaner. an earlier version of this post had such a solution though. and note this js hack is not needed if you don't allow custom themes. the site's theme will switch automatically on system theme switch if you have the @media query right in the css.
but in general avoid implementing theme switching. assume the users are smart enough to install extensions if they need that. make sure the site works with those.
# caveats
try keeping colors to a minimum. some people are colorblind. some people want to print your page with a black and white printer. some people might want to read something on a black and white eink screen. or some people just copy-paste stuff without formatting and colors.
if you do colors, make sure it's just a hint and your interface makes sense without colors too. e.g. suppose you have a list of operations that can either succeed or fail. don't rely solely on the color to encode this (green is success, red is failure). write out "success" or "failure" before or after each operation too.
in any case, i hope this practice of respecting browser themes will catch on.
[non-text content snipped]
published on 2023-06-15, last modified on 2024-10-08
# firefoxtweaks: profile on ramdisk is the only way to stop its constant disk io
i have a raspberry pi 4 on which i'm using firefox. raspberries use sdcards as their disk. sdcards are known to have limited lifespans. and when i look at firefox's disk activity, i see that it's constantly writing to disk. it does it even when i'm doing nothing in the browser. programs doing constant busywork are bothering me. and it is doubly bothering me when they are wearing down my hardware.
anyway, in this post i document my adventures in configuring firefox on linux in case i ever need to recreate this on other machines.
# measuring disk i/o
there are many ways to measure disk i/o but i went for a very simple way. i wrote a small go script that recursively installs inotify watches in every directory in my home directory: @/listwrites.go. those inotify watches report every file write event. it's not perfect as i don't think it detects mmap writes but it's good enough for me.
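for reference, here's a minimal sketch of such a watcher (illustrative, not the actual @/listwrites.go; it uses the fsnotify library instead of raw inotify for brevity):

package main

import (
  "io/fs"
  "log"
  "os"
  "path/filepath"

  "github.com/fsnotify/fsnotify"
)

func main() {
  w, err := fsnotify.NewWatcher()
  if err != nil {
    log.Fatal(err)
  }
  defer w.Close()
  // recursively watch every directory under $HOME (watch errors are ignored here).
  root := os.Getenv("HOME")
  filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
    if err == nil && d.IsDir() {
      w.Add(path)
    }
    return nil
  })
  // print every file write event.
  for ev := range w.Events {
    if ev.Op&fsnotify.Write != 0 {
      log.Print("write: ", ev.Name)
    }
  }
}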
i've seen a lot of activity in the profiles directory. i've learned that firefox keeps a lot of its data in sqlite files and it is constantly editing those. thanks to the file names i managed to find a few tweaks that reduced the activity levels.
# io tweaks
all these tweaks should be done in either about:config or in a file called user.js in the profile's directory. i edit user.js because then i can put it into my dotfiles repo and share it across installations. i'll post my changes as what one would enter into user.js. should be obvious how to make the change in about:config from that.
first, firefox aggressively caches web content. i don't see much point persisting its cache onto disk so i'll simply disable that:
user_pref("browser.cache.disk.enable", false);
there's still some constant writing, i think for the ability to restore my tabs in case firefox crashes. i'm not sure i can disable this, but i can bump its interval:
user_pref("browser.sessionstore.interval", 6000000);
with the above io tweaks i see reduced io but unfortunately it is not zero and i'm not sure what else to disable. this wouldn't even bother me too much if firefox just wrote the data once and was done with it. then i could simply bump the disk dirty page expiration times in the kernel settings and there wouldn't be any disk io. but due to how sqlite works, the writes are fsync'd which forces a disk write regardless of the dirty page expiry settings.
# profile in tmpfs
since i couldn't figure out how to eliminate the rest of the writes, i decided to put my profile into a ramdisk. i simply point firefox to a profile in /dev/shm. i sync the contents to the on-disk backup every 4 hours via rsync.
i implement this via a wrapper script i wrote in go: https://github.com/ypsu/cfg/blob/master/utils/firefox.go. i simply run "firefox" and it does its magic. it does nothing on machines where i have not set this up.
it needs some setup in .mozilla/firefox though. append ".disk" to the default profile's directory name. then create a symlink to the target tmpfs directory as the profile name. firefox will be none the wiser that the profile is in tmpfs.
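here's a rough go sketch of such a wrapper (illustrative, not the actual firefox.go; the paths are made up and the 4 hour interval comes from the description above):

package main

import (
  "log"
  "os"
  "os/exec"
  "time"
)

// rsync mirrors src into dst.
func rsync(src, dst string) {
  cmd := exec.Command("rsync", "-a", "--delete", src+"/", dst+"/")
  cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
  if err := cmd.Run(); err != nil {
    log.Print("rsync failed: ", err)
  }
}

func main() {
  // hypothetical paths: the ".disk" directory holds the on-disk copy,
  // the tmpfs directory is what the profile symlink points at.
  disk := os.Getenv("HOME") + "/.mozilla/firefox/default.disk"
  ram := "/dev/shm/firefox-profile"
  os.MkdirAll(ram, 0700)
  rsync(disk, ram) // restore the profile into ram on startup.
  // sync the ram copy back to disk every 4 hours.
  go func() {
    for range time.Tick(4 * time.Hour) {
      rsync(ram, disk)
    }
  }()
  // run the real firefox binary and do a final sync once it exits.
  cmd := exec.Command("/usr/bin/firefox")
  cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
  cmd.Run()
  rsync(ram, disk)
}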
there are more advanced solutions to this: https://wiki.archlinux.org/title/profile-sync-daemon. it has many more features. i'm using my own short script so that i don't need to install a gazillion things whenever i move to a different machine. and i don't like running daemons anyway.
# other configuration
there are other minor annoyances like caret blinking. i often use an e-ink screen (see @/dasung) and caret blinking is wearing down my screen! and as i established above, i hate software wearing down my devices. fortunately stuff like this is easy to disable in firefox:
user_pref("ui.caretBlinkTime", 0);
i have a bunch of other knobs for many random other things such as disabling search and autocompletion in the url bar, disabling warning screens, etc. all these settings are in my dotfiles repo: https://github.com/ypsu/cfg/blob/master/misc/firefoxuser.js.
i also customize my search engines. the installation script for those is also in my repo: https://github.com/ypsu/cfg/blob/master/misc/firefoxsetup.
i also install ublock origin as the adblocker because the internet is unusable without it. sometimes i also disable web workers via ublock origin. web workers allow the browser to run multiple background js threads with which it can consume even more resources, even mine bitcoins via all your cores. if you allow push notifications for a site, then the website's web worker can run even when the website is closed. dunno, it feels wrong that web workers are not permission gated. i block them by adding the following to the "my filters" config section:
||$csp=worker-src 'none',domain=~exampledomain.com
the exampledomain bit is a reminder for me of how to allowlist specific websites where i would want the workers to work in the future without needing to fully disable ublock for that site. but in practice i'd just disable ublock for the site if i needed web workers e.g. for offline access.
# chrome
a sidenote about chrome. i hate chrome's approach to configuration. they try to hide most settings because they have some unreasonable fear of knobs. their android app doesn't even allow installing extensions. how would its parent company make money if people install adblock, right?
fortunately if you are willing to read the source code, you can often find some hidden command line switches for specific behavior. for example at work this is how i start chrome in order to disable the stupid autocompletion in the url bar:
chrome \
  --enable-features="OmniboxUIExperimentMaxAutocompleteMatches:UIMaxAutocompleteMatches/1/UIMaxAutocompleteMatchesByProvider/*%3A0" \
  --disable-features="UrlScoringModel" \
  --force-prefers-reduced-motion \
  "$@"
sigh. and i'm not sure this hidden setting will stay around. as a precaution i check the presence of the flag at the head version every time i start the browser: https://github.com/ypsu/cfg/blob/master/utils/xchrome. if i notice that some developer removed my flag, i can nicely ask them to put it back. i only use chrome because that's pretty much unavoidable at work for reasons™.
# wishes
i wish things like no disk writes or simpler configurability were better supported in browsers. but i don't think as a user there's a way to ask for these things. the only way i can imagine is to actually work in these developer teams and then care about this and constantly push for these settings.
i think the problem is that these browsers are funded by big advertisement companies and so the roadmap caters to their desires, less to ordinary users. i wish they were @/shapeware funded by both the users and the website operators.
edit on 2024-02-10: tweaked the chrome flags to make it work again.
published on 2023-07-01, last modified on 2024-02-10
comment #firefoxtweaks.1 on 2023-07-02
You can skip syncing to memory with overlayfs.
comment #firefoxtweaks.1 response from iio.ie
i can see how that could work. you avoid the initial sync though you still have to regularly sync back. neat nevertheless! i might play with this in the future. (fun fact, i did play with such overlay-like filesystems in the past, see @/fsbuf.)
# imtasks: my "immediate task queue notebook" helps me progress on my projects
nowadays i feel i have a relatively good grasp on the ability to focus on a task and finish it. i achieve this via a plethora of tricks and habits. i wrote about these a lot in past posts. this time i would like to describe another trick i'm using: the immediate task queue notebook.
# problem
"i feel overwhelmed! i know i want to work on project x but i don't know where to start! i don't know what to do now. let me quickly check my favorite youtube channel just to make sure i'm caught up with everything else!" [ ... several hours pass ... ] "oh, the day is over, i'm so lazy, such an incapable person!"
that's how i approached project work in the past. it wasn't very effective.
# solution
i bought a small, A7 sized notebook. i write some simple tasks into it, usually in the evening.
the rule: if there's a task in that notebook, i can't procrastinate until i cross that out. no youtube, no chat, no emails. just force myself through the tasklist until the task queue in the notebook is empty again. then i can relax a bit.
that's all there is to it.
# analysis
this no procrastination rule is quite annoying so i only write small tasks into it. i use a different system for tracking bigger tasks (@/task). given the tasks in this queue are small, it's more likely that i can just do them even if i feel unmotivated. but in the end that's all that matters for achieving stuff: make steady progress, no matter how small.
it's important that the notebook is small. when there's an active task in the queue and i'm supposed to be working, i simply put the notebook between me and the keyboard to constantly remind myself to clear the tasks. it has to be an old-school notebook because a smartphone or another digital tool can get distracting very fast.
an important bit is to regularly fill the task queue with small tasks. but i have a process for that too. for a while now i've been committed to avoiding screen time after 9pm (see my 2023-06-03 update in @/reflecting). which means i'm freewriting (@/freewrite) and that naturally leads me to do some planning tasks for my next morning (@/inspiration).
i suppose it's not that different to the practice of attaching post-it notes to the monitor. but i prefer my little spiral-bound notebook. it's much cleaner. and the notebook's linear structure prevents me from adding new tasks before i've cleared the previous ones. i don't flip to the next page to add new tasks until i've ticked off all the tasks on the current page.
i have to admit, i haven't fully mastered this process yet. sometimes i still do end up procrastinating by checking emails before i get to my tasks. but in the end i do finish these tasks, otherwise i would be annoyed that the notebook is in front of me all the time. i've been doing this for a few months now and i'm getting better at it, so yay.
i love that it forces me to make some progress every day. usually all i need is that i start with a little task and then afterwards i no longer need such crutches in the current work-session. this immediate task queue notebook is the ignition system for my brain.
# note from 2024-11-16 (more than a year after the original post)
i'm still doing this occasionally when i feel unmotivated to work otherwise. it's super effective. but yet another notebook is a hassle. and i don't take my small notebook to work either. so these days i just edit this list on my smartphone. i use google keep because it's already on my phone. any other text editor would be fine too. if i have pending imtasks then i drag the app's icon to the centre of the home screen and i put down the phone between me and the keyboard so that i don't forget about my imtasks. it's just as effective as the notebook version.
published on 2023-08-05, last modified on 2024-11-16
# schlaugh: it's a nice, update-once-daily social media site
if i ever convince myself to join a social media, it most likely will be https://www.schlaugh.com.
the key feature of it is that it updates only once per day and each member can post at most one message per day. you check it out once per day and you move on with your life. there's no like buttons, no "algorithm", no doomscrolling, no shouting matches, etc. basically social media without the addictive features.
it still has a concept of the feed though. iiuc you can subscribe to a list of tags or users. and then you can check out new posts of them once per day.
the site supports markdown and has special syntax for linking users and specific posts for cross-referencing. but otherwise it has no "explore" functionality. you learn about other users or interesting tags from other posts.
now keep in mind i have not yet actually tried the site, i just really like its rules.
i was fantasizing about "slow communication" sites before, see @/slowpals. schlaugh is a bit different than i was describing but i think it could also fill the void i was talking about in that post. fortunately i'm currently sated communication-wise. i'm forcing myself to write at least one post per month both on here and on my work-internal blog too. that pretty much satisfies my cravings for some communication. but if i ever become jobless and my todo backlog is empty, i might join the site for some additional excitement. i made this post about it so that i don't forget.
published on 2023-09-04
# rebrand: i renamed this site to iio.ie
this blog used to live at notech.ie but now it lives at iio.ie. this post explains my motivations behind this change.
# the name
6 years ago i bought notech.ie to get serious about blogging. i followed some bloggers, i wanted to be like them. i also wanted to run my own blog server just to show the world how it's done. so i wrote a very minimalistic server and served plaintext only posts. i was quite technophobic back then and thought i would write about how to live without technology. so i named the site "no technology blog", notech in short.
but 3.5 years later i learned go. programming became fun again. i even rewrote this site to be more ordinary. and i was no longer blogging about technology avoidance. rather it became a place where i write sometimes very personal thoughts to sort things out for myself. the notech vibe was no longer fitting. i was also envying bloggers with short domains.
so the decision was made: i'll migrate to a new, short domain. i wrote a little script that queried the whois database for various letter variants of my liking to see what's available. then i've picked iio.ie since that's easy to type on both qwerty and norman layout (@/kbd). there's not much more to it than that. if i had been extra motivated, i'd have gone for a .ee tld for an even easier typing experience. but it required non-trivial bureaucracy so i didn't bother. so does .ie, but i had already sorted that out for the previous domain, so all i needed to do was reference my previous domain.
# the server
previously i ran the server at home. that worked flawlessly over the years. the only bit that annoyed me is that this way the internet, along with its army of bots, was constantly trying to hack it. i didn't really isolate the server from my home network so this was a bit worrisome for me.
in the cloud i wouldn't care much about this. even if someone managed to hack the server, there's not much damage they could do there.
cloud would also provide much higher availability. if i ever move apartments again, i wouldn't need to worry too much about my server not working in the meantime. nor would i need to re-create my weird setup for the server. nor would i need to worry whether my new isp is still giving me a static ipv4 address.
cloud would work for a while even after my death! i even have a plan to add a secret new post that gets revealed if i stop updating my blog for a longer period of time. i want to blog from the afterlife!
and nowadays cloud is pretty cheap. i found free solutions for all my needs using fly.io and cloudflare.com. they have quite easy apis. so thanks to these this blog lives now in the cloud for free.
i write about the technical aspects in @/cloud.
# lost features
i decided to drop one feature: gopher support. this site used to serve the posts in gopher format on port 70 too. i don't think that had any usage at all. to continue supporting that over ipv4, i'd need a static ipv4 address. that is also included in fly.io's free tier so i could totally support it.
but i want to be a nice internet citizen and not waste a whole precious ipv4 address on my silly blog so i went with a shared ipv4 address. tbf, i'm also glad to drop the gopher support. i'm creating a lot of html demos lately, and gopher doesn't seem to be the right medium for this blog anyway.
i also no longer need to manage ssl certificates myself. fly.io takes care of it. yay!
previously i had a smtp server implemented for @/msgauth. i replaced that with a cloudflare email worker so now i implement that with a simple http handler instead. things became much simpler, yay!
but other than the gopher thing, these changes should be completely transparent.
# no redirects
now that i have a new domain, what should happen with the old one? i don't want to keep renewing that. in case anyone looks at the old site, i want them to notice the migration and update their bookmarks. if i make the old domain transparently redirect to the new site then people won't notice until the domain is gone and then they won't know where the new domain is.
for this reason both the old rss and the old domain just print a message that the site was migrated and has no automated redirect.
# effort
i was worried this migration would take a long time to execute. but i only needed about 4 full, undisturbed days to figure out everything and finish the migration. most of the time was spent on debugging because i was trying to do dumb things. and i managed to squeeze in a few improvements to my server such as improved rss support and etag header support. i was pleasantly surprised by the simplicity of everything.
i now try taking every fifth week as vacation to work on small projects like this. that's how i found the time for this. seems to be working perfectly.
tl;dr: the shorter domain name and the cloud migration are something i had been thinking about for a while and now i'm happy that i finally managed to make the change.
published on 2023-09-07
# cloud: i recommend using fly.io and cloudflare.com for most cloud needs
as part of @/rebrand i migrated this blog to the cloud from my rusty old first gen raspberry pi. the old setup worked flawlessly but nevertheless i decided to migrate it because that's the hip thing to do in the 21st century.
# the choice
i spent ages contemplating on the cloud provider choice. oracle? google? vultr? ovhcloud? hetzner? fly.io?
i've chosen fly.io for its transparent free tier. they give you a fixed amount of free credits per month and you can spend it the way you want on their services. that's the neatest approach. even if i decide to set up other services, i think fly.io will be the most flexible. i'm aware that fly.io has bit of a bad reputation for its outages (https://community.fly.io/t/reliability-its-not-great/11253). things should be improving though. for my blog usecase i don't mind it being a bit buggy in exchange for the transparency.
but for the comments i also needed storage. using storage on fly.io makes your service less reliable because iiuc you get a block device on a specific host. your virtual machine can then be scheduled on that host only. if that host is out, the service is down which is a bummer.
so i started looking at free s3-like storage providers. i found tebi.io and cloudflare.com for this. i've chosen cloudflare after agonizing about this choice for a whole day. cloudflare comes with many other services (such as email workers which i'll also need) so i decided might as well play with that. it turned out to be a good pick.
r2 is cloudflare's s3-like storage offering. but it also has a key-value (kv) api for simpler needs. i ended up using kv instead of r2.
# fly.io: deployment
the deployment is docker based. i wrote a very simple dockerfile that installs go on alpine, git pulls my blog and builds it. then it copies the binary onto another clean alpine image, git pulls the blog content and runs the server.
note that docker does a lot of caching so a trick is needed to ensure that the image gets rebuilt after a git push. i use `COPY .git/refs/remotes/origin/main version` for this. see https://github.com/ypsu/blog/blob/main/Dockerfile for the gory details. i also needed a fly.toml but that was fairly straightforward.
i run "fly deploy" every time i change the code. it builds and pushes quite quickly. fly.io's cli tool is pretty handy.
i've written my server to git pull the content automatically on startup and over its lifetime so it always presents the freshest content. i don't need to rebuild the image whenever i make a new post.
# fly.io: autostopping
i have enabled auto-stopping for my server. if my server had no active requests in the past ~5 minutes, fly.io shuts the machine down. it turns it back on when the next request arrives.
this is pretty neat. my service doesn't actually need to run if nobody is talking to it. i like the energy efficiency of this.
the downside of this is that the first request is going to be slow. the vm has to start, git pull for the content must complete, i need to fetch the comments from cloudflare, and then i need to start the server. it can take up to 2 seconds and sometimes even more. but once up it's fast enough.
so far this doesn't bother me. i can very easily disable this if this starts to annoy me. see min_machines_running at https://fly.io/docs/reference/configuration/#the-http_service-section.
edit: well, this didn't last a whole day. i got annoyed by the occasional slowness. it's an always running server now.
# fly.io: idle timeouts
for both @/webchat and @/msgauth demos i need long lived idle connections. the internet claimed that this won't work: https://community.fly.io/t/is-it-possible-to-increase-the-timeout-to-120-sec/3007/5.
i had two ideas to work around this:
but it turns out this isn't an issue. i had 20+ minute long idle connections that completed just fine after the event arrived on the server side.
# fly.io: dynamic request routing
right now i run a single server. but if i want to implement a leader-follower like architecture, something i was alluding to in @/scaling, this could be pretty trivial in fly.io. i'd simply use https://fly.io/docs/reference/dynamic-request-routing/ to route POST requests to the leader. GET requests i could fulfill from any replica as soon as its state catches up with the state the client has last seen (stored in a cookie).
https://fly.io/blog/how-to-fly-replay/ explains this neat feature in more detail.
# cloudflare: serverless
but on fly.io i don't have a simple solution to storage. this led me to cloudflare and its whole "serverless" ideology.
the idea is that i give cloudflare a javascript snippet and they will execute it whenever my endpoint is hit. this doesn't need to start up any x86 compatible virtual machines, just a javascript environment, similar to what a browser does. the isolation the js executors give is more than adequate for most usecases. perhaps later the wasm support will further improve and then non-js languages can be used with ease too.
i realized i could implement most of my blog as a serverless service. but @/webchat or @/msgauth wouldn't work as a serverless cloudflare worker. for that i would need to use cloudflare's "durable objects" api: https://blog.cloudflare.com/introducing-workers-durable-objects/. i really like the concept and i can totally imagine myself using it for some stuff.
so static hosting like github pages + cloudflare durable objects would be enough for this blog. there are 2 reasons i'm not regretting my old school setup with fly.io though:
but serverless computing is something i'd seriously consider for a serious application.
# cloudflare: dns management
i pointed iio.ie's domain nameservers to cloudflare. i didn't enable cloudflare's ddos protection for my service. so the iio.ie requests go directly to fly.io.
it's an unsupported interaction anyway because i really wanted my fly.io instance to only talk https. but in order for fly.io to generate the ssl cert, it wants the domain to be pointing at fly.io's ip address. that won't be the case if the domain points at cloudflare's proxy.
https://community.fly.io/t/cloudflare-525-error-randomly-occurs/1798 explains some workarounds. basically turn off https enforcement at fly.io's proxy level and do it yourself (or not do it at all). i think the fly.io app would need a shared ipv4 address for that. then you can certainly bypass fly.io's cert management limitations. but that sounds like a pain.
nevertheless i wanted to host the cloudflare workers on my domain for completeness' sake. this way i could change my storage backend without needing to change anything in my server. so i pointed api.iio.ie to some dummy ip address (100:: and 192.0.2.1) and i enabled cloudflare for it. then i configured cloudflare to route only a specific path to my cloudflare worker. this way a silly bot stumbling onto api.iio.ie's frontpage won't eat into my worker quota.
when i initially configured my cloudflare worker to talk to my fly.io server, it didn't work. the fetch request errored out with a too many redirects error. for some reason cloudflare really insisted talking http with my server which always got a redirect response. i've fixed this by asking cloudflare to always use https. more specifically, i switched my domain's ssl encryption mode to "full (strict)" in the cloudflare dashboard.
# cloudflare: request workers
i've created a cloudflare worker and pointed a specific path under api.iio.ie to it. here's what my worker looks like implementing some of the operations:
async function handleFetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
  let method = request.method
  let path = (new URL(request.url)).pathname
  let params = (new URL(request.url)).searchParams
  if (env.devenv == 0 && request.headers.get('apikey') != env.apikey) {
    return response(403, 'unathorized')
  }
  let value, list, r
  switch (true) {
  case path == '/api/kv' && method == 'GET':
    let value = await env.data.get(params.get('key'))
    if (value == null) return response(404, 'key not found')
    return response(200, value)
  case path == '/api/kv' && method == 'PUT':
    await env.data.put(params.get('key'), await request.text())
    return response(200, 'ok')
  case path == '/api/kv' && method == 'DELETE':
    await env.data.delete(params.get('key'))
    return response(200, 'ok')
  ...
my blog server uses this to upload, fetch, delete individual comments. i also have further endpoints for listing and fetching all comments in a single request.
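for illustration, here's a minimal go sketch of how a server could call such a kv endpoint (the /api/kv path, the key query parameter and the apikey header come from the worker snippet above; the full url and everything else are assumptions):

package main

import (
  "fmt"
  "io"
  "net/http"
  "net/url"
  "os"
  "strings"
)

// endpoint is an assumed full url; only the /api/kv path comes from the worker above.
const endpoint = "https://api.iio.ie/api/kv"

// kvdo issues one request against the kv worker with the shared api key.
func kvdo(method, key, body, apikey string) (string, error) {
  req, err := http.NewRequest(method, endpoint+"?key="+url.QueryEscape(key), strings.NewReader(body))
  if err != nil {
    return "", err
  }
  req.Header.Set("apikey", apikey)
  resp, err := http.DefaultClient.Do(req)
  if err != nil {
    return "", err
  }
  defer resp.Body.Close()
  b, err := io.ReadAll(resp.Body)
  if err != nil {
    return "", err
  }
  if resp.StatusCode != 200 {
    return "", fmt.Errorf("%s %s: status %d: %s", method, key, resp.StatusCode, b)
  }
  return string(b), nil
}

func main() {
  apikey := os.Getenv("APIKEY")
  if _, err := kvdo("PUT", "comment-example-1", "hello", apikey); err != nil {
    fmt.Println(err)
    return
  }
  v, err := kvdo("GET", "comment-example-1", "", apikey)
  fmt.Println(v, err)
}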
cloudflare's cli tool is pretty easy to use too. i run `wrangler dev -r` to run the worker locally. it is then pointed to a test kv namespace so i can have test comments. and when i'm ready, i use `wrangler deploy` to push it to production.
it's pretty straightforward, i love it.
# cloudflare: email workers
for my @/msgauth demo i need to parse incoming emails. previously i used a hacky smtp server for that without any spoofing protections. but cloudflare has this: https://developers.cloudflare.com/email-routing/email-workers/.
the worker runs for each email received on the preconfigured address. in my case it just sends a POST request to my server to notify about the email's receipt. it's super straightforward and it allowed me to delete lot of hacky code.
# logs
both `fly logs` and `wrangler tail` allow me to stream the debug logs in real time. but neither of them offers historical access or analytics for them. on fly.io i could simply configure an external service to push them to. on cloudflare this needs a paid plan already.
but meh, i can live without logs. having access to logs would make me just obsess about them. if i need to debug something then i hope the real time logs will be more than enough. they were certainly enough when i was moving my blog to the cloud.
# github actions
oh, by the way, i am also using github actions. whenever i push to github, a github action rebuilds the backup page of this blog that is linked on the frontpage. the action also calls an endpoint on the blog to run `git pull`. this way this blog always serves fresh content without needing to re-deploy the service on fly.io.
# takeaway
it's only been a day since this blog is on the cloud using those two providers but so far i'm super happy with the results. i can't think of anything that could go wrong so i hope it stays this way.
next time someone asks me how to create an online service, i will probably suggest looking into cloudflare. cloudflare also has s3-like storage, an sql database, cron triggers, etc so it should cover most needs. and if they need something for more computationally intensive tasks, then for those parts i'll recommend fly.io. i think that's a super sensible approach in 2023.
i'm sure the other cloud providers are nice too. i haven't really used them. well, i used google cloud for @/gdsnap but i found its dashboard (and its whole cloud offering tbh) just painfully overwhelming. my choices only offer a few apis but they do them relatively well.
published on 2023-09-08
comment #cloud.1 on 2023-09-07
then it copies the binary onto another clean alpine image
comment #cloud.1 response from iio.ie
oh, neat! though i use apk in the second round to install git so i think i still need to stick to alpine.
# prioritization: take tasks from multiple queues regularly
suppose there's a team which works on a project that is broken into many tasks. also assume that new tasks are constantly coming in, e.g. as users test the project and report bugs. in what order should the team work on the tasks?
what i usually see is that project leads order tasks solely by priority. "we should be working only on the most important things!" they say.
i find that very frustrating. that approach often leaves small bugs and inefficiencies around because those are never important. over a long time they accumulate a lot of wasted but unaccountable time for the users. it's especially aggravating when many users are requesting a feature or reporting a bug but the project team is unresponsive. the users feel like they are shouting into the wind.
something like this would make me a happy user:
of course avoid multitasking and only do one thing at a time. see https://apenwarr.ca/log/20171213 or any kanban evangelist talking about work-in-progress limits. but i say whenever you are picking the next task, make sure that all 3 queues are fairly represented in your selection over time.
this also brings some randomness into the project and that often leads to surprising benefits. maybe a user requested some small feature that seemed silly but somewhat useful. but once implemented, it becomes a super popular feature and makes the whole product even more popular. nice ideas often come from the users rather than the leads who often aren't using their own product.
the point here is that it's easy to neglect user feedback over time but the above recommendation can help avoid that trap. if "users come first", then that should be reflected in the processes.
published on 2023-10-01
# yseq: encode the creation year into the sequence numbers
suppose you are giving a regular event sequence numbers. e.g. ticket numbers, release versions, userid numbers, forum thread ids. make the first two digits follow the year mod 100.
let's take golang release numbers as an example (https://go.dev/doc/devel/release). 1.0 was released in 2012-03, 1.1 in 2013-03, ..., 1.21 in 2023-08. rather than doing it sequentially, use a year prefix and reset the counter each year:
the form of these sequence numbers would be [yy][id] where id is an increasing number that gets reset on january 1. example starting from 2023: 230, 231, ..., 239, 2310, 2311, ... 2399, 23100, 23101, ... , 240, 241 (the latter two being from 2024).
if you want to ensure the numbers are always strictly increasing then keep the length on the reset day. so in the previous example a reset after 23101 would result in 24000, 24001, ..., 24009, 24010, ... . i recommend doing this because then sorting by id numbers remains a sort by age. (and ratelimit the creation just to protect against accidental length inflation due to a runaway ticket creation bug.)
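here's a rough go sketch of generating the next id in this scheme (my own illustration, including the length-keeping reset; not a canonical implementation):

package main

import (
  "fmt"
  "strconv"
  "time"
)

// next returns the id following prev. pass prev=0 for the very first id ever.
func next(prev int, now time.Time) int {
  yy := now.Year() % 100
  if prev == 0 {
    return yy*10 + 0 // e.g. 230 in 2023.
  }
  prevstr := strconv.Itoa(prev)
  prevyy, _ := strconv.Atoi(prevstr[:2])
  counter := prevstr[2:]
  if prevyy == yy {
    // same year: bump the counter part (239 -> 2310, 2399 -> 23100).
    n, _ := strconv.Atoi(counter)
    id, _ := strconv.Atoi(fmt.Sprintf("%d%d", yy, n+1))
    return id
  }
  // new year: zero the counter but keep its length (23101 -> 24000).
  id, _ := strconv.Atoi(fmt.Sprintf("%d%0*d", yy, len(counter), 0))
  return id
}

func main() {
  t2023 := time.Date(2023, 8, 1, 0, 0, 0, 0, time.UTC)
  t2024 := time.Date(2024, 1, 2, 0, 0, 0, 0, time.UTC)
  fmt.Println(next(0, t2023))     // 230
  fmt.Println(next(239, t2023))   // 2310
  fmt.Println(next(23101, t2024)) // 24000
}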
for extra points, make the sequence numbers typo-protected using damm's algorithm. it adds an extra digit to the number in a way that detects most fat-finger typos when entering the number. so rather than 230, 231, 232, you would have 2304, 2312, 2320 (per https://jackanderson.me/2020/09/damm-algorithm-check-digit-tool/). then when a number is entered into the system, the ui can immediately detect silly typos rather than accepting the wrong data and leaving the user wondering what is going wrong. it might be overkill for most things but it's useful for stuff where people might enter or exchange numbers manually such as employee ids or telephone numbers.
oh, and in damm's algorithm you could use a different base table (or a constant offset mod 10 on the check digit) for different systems. so you would get a different check digit for software issue numbers vs userid numbers. this would add an additional layer of defense against accidentally mixing up numbers.
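to make this concrete, here's a small go sketch of the check digit computation (the quasigroup table is the standard published one for damm's algorithm; it reproduces the 2304/2312/2320 examples above):

package main

import "fmt"

// the standard order-10 quasigroup table for damm's algorithm.
var damm = [10][10]int{
  {0, 3, 1, 7, 5, 9, 8, 6, 4, 2},
  {7, 0, 9, 2, 1, 5, 4, 8, 6, 3},
  {4, 2, 0, 6, 8, 7, 1, 3, 5, 9},
  {1, 7, 5, 0, 9, 8, 3, 4, 2, 6},
  {6, 1, 2, 3, 0, 4, 5, 9, 7, 8},
  {3, 6, 7, 4, 2, 0, 9, 5, 8, 1},
  {5, 8, 6, 9, 7, 2, 0, 1, 3, 4},
  {8, 9, 4, 5, 3, 6, 2, 0, 1, 7},
  {9, 4, 3, 8, 6, 1, 7, 2, 0, 5},
  {2, 5, 8, 1, 4, 3, 6, 7, 9, 0},
}

// checkdigit computes the extra digit to append to id.
func checkdigit(id string) int {
  interim := 0
  for _, ch := range id {
    interim = damm[interim][ch-'0']
  }
  return interim
}

// valid reports whether id already ends with a correct check digit.
func valid(id string) bool { return checkdigit(id) == 0 }

func main() {
  fmt.Println(checkdigit("230"), checkdigit("231"), checkdigit("232")) // 4 2 0
  fmt.Println(valid("2304"), valid("2340"))                            // true false (transposition caught)
}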
# benefits
in go's example the cadence of the releases becomes immediately clear: 2 releases per year. this makes it much easier to reason about version numbers.
which go version introduced min and max? 20? 21? i don't know. but if you said it was introduced mid 2023, my brain somehow can encode that information more efficiently because it can encode it as "this summer". this is a question i ask myself surprisingly often because online programming judges are at various go versions ranging from 1.18 to 1.20 and i never know whether i can use the native min/max. when i see the year-encoded 220 instead of the raw 1.18, i get a much better sense of how old the judge's software is.
there's a similar benefit when it comes to ticket or issue numbers for software or forum thread ids. when you see a ticket or thread id like #123456 and it uses a year numbering scheme, you know that it's a ticket from 2012 so it's a very old ticket or thread. you know that the information in it might be dated, it must be read with caution. i don't even need to open the thread and remember to check on the date. e.g. take https://stackoverflow.com/a/11227902 and year-encode the id to 1211227902. with the latter id it would be clear that i'm linking an 11 year old answer.
# full date
why not go full date? rather than naming the version 1.18, name it 2023-03-15.
i don't really like this. it's just too long. there's a nice advantage of the above proposal: the length of the number is determined by the cadence of the number generation. just a handful of events per year? you get a sweet 3 digit number. you have a weekly cycle? you get a still nice 4 digit number.
using the date means you need to use a string or a very hard to read number. and you can't even make two releases on the same day. i think the year-prefix is the best on the usability tradeoff curves.
however it might be interesting to add the month as extra precision. something like floor((month+1)/2) should keep the range between 1 and 6 in order to keep this bit of data a single digit. jan-feb is 1, nov-dec would be 6. it's not too hard mental math. or if you have hex numbers (e.g. some sort of hashing) then adding the raw month as a hex digit after the year should work quite nicely too.
# uuids
you should apply this technique to uuids too. if your uuid is long enough, you might as well include a hex encoded month too. 231 would mean 2023-january and 23c would mean 2023-december. e.g. if you are creating an image upload site or a redirect site and you create uuids like X3v44tFz then prefix them with the year-month: 23bX3v44tFz. then i will know that the uuid was created in 2023-november just by glancing at it.
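a tiny go sketch of such a prefix (illustrative):

package main

import (
  "fmt"
  "time"
)

// prefix returns e.g. "23b" for 2023-november: two year digits plus one hex month digit.
func prefix(t time.Time) string {
  return fmt.Sprintf("%02d%x", t.Year()%100, int(t.Month()))
}

func main() {
  t := time.Date(2023, time.November, 5, 0, 0, 0, 0, time.UTC)
  fmt.Println(prefix(t) + "X3v44tFz") // 23bX3v44tFz
}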
another benefit of this is that it makes hash collisions less likely. your service doesn't need to check against an infinite database of past hashes, only against hashes from the current month.
if you have expiring data (e.g. log or event data identifiers), then adding a single digit precision for the month is more than enough. 3bX3v44tFz would make it clear enough that it's from this year's november.
see @/numids for a longer exploration of this.
# the 2100 problem
switch to triple-digit prefixes at that point. if you start the scheme today (year 2023), the first digit being 1 will mark it clearly that it's an id from 2100+. 99345 and 123456 would unambiguously mean 2099 and 2123.
another thing you can do is start the year from an offset. e.g. it's 2077 and you want to use this system. assume the baseline is 2050 and then the first id generated would be 270. doing a +/- 50 years is a relatively easy mental load. then you can use 2 digit prefixes for 73 more years before you switch to a triple digit prefix. and then people will have 100 more years to get used to 3 digit prefixes. the first year-ambiguous id will be 2000 because it might mean either a ticket from year 2070 or from year 2250. but at that point surely nobody will care about the old ids. you can bump the id length just to make the old ids look way too short and immediately recognizable as old. so even if your cadence is 2 numbers per year, you would have 20000 as the first id in year 2250.
# homework
maybe this is all silly but play with the idea. next time you see a sequentially increasing number, can you find any benefit if the first two digits encoded the year the number was created?
# edits
2023-11-05: the length-resetting sequences are better matched to usecases where you have version strings such as browser extension versions. just use `yy.n` as the version string. i found other recommendations for this too: https://yearver.org. and there are projects following a similar scheme already such as https://www.jetbrains.com/clion/download/other.html. raw calver seems more popular though: https://calver.org/users.html.
2023-11-24: added the uuid section.
2024-03-01: added a reference to @/numids.
published on 2023-11-04, last modified on 2024-03-01
comment #yseq.1 on 2023-11-05
using the date means you need to use a string or a very hard to read number. and you can't even make two releases on the same day.
So in your model we're talking 5 digit numbers here. Let's say I have a directory with all releases from a year, 231.zip up to 23730.zip (worst case). How do I quickly find the newest?
comment #yseq.1 response from iio.ie
the largest number is always the most recent. so 23730.zip in this case. finding this across the years is a bit trickier if you have a length-resetting sequence. in that case 243 is newer than 23720. for that you have to compare the first two digits and then the rest. but i just recommend starting from 24000 if your last year's last sequence number was 23730. this would be a non-length-resetting sequence then.
but i'm not sure i understood the question correctly. can you elaborate?
comment #yseq.2 on 2023-11-05
Sorry for the confusion, let me rephrase: If I run `touch {20..24}{0..730}.zip`, what would be a simple(!) command invocation that provides the correct logical order?
comment #yseq.2 response from iio.ie
i don't know of a simple way and that's why i recommend a non-length-resetting sequence. but if you really need something short-ish then you can convert this to a version string, use a version sort, and then convert back:
printf "%d.zip\n" {20..23}{0..15} | shuf | sed 's/^../&-/' | sort -V | sed s/-//actually if using version numbers is fine then just use that right away with the major version being the year and the minor version being the monotically increasing number. and now that i look into this, such recommendations already exist: https://yearver.org/. thanks, i've added an edit.
# nearoom: linux locks up when nearly out of memory
it's depressing that even in 2023 the linux kernel still pretty much locks up when nearly out of memory (oom). and in order to get out of it the user needs to trigger the oom killer manually. it's easy to reproduce. here's a repro but make sure you have enabled the oom killer before you try this:
echo 1 | sudo tee /proc/sys/kernel/sysrq
and then here's the crazy script:
// gcc -std=c99 -Wall -Wextra -Werror -g -o eatmem eatmem.c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char** argv) {
  int limit = 123456789;
  if (argc >= 2) {
    limit = atoi(argv[1]);
  }
  setbuf(stdout, NULL);
  for (int i = 1; i <= limit; i++) {
    memset(malloc(1 << 20), 1, 1 << 20);
    printf("\rAllocated %5d MiB.", i);
  }
  sleep(10000);
  return 0;
}
you have to run it twice. first time it prints you the available memory and the second time you run it with a few megabytes less to put the system into the "nearly out of memory" state:
$ gcc -std=c99 -Wall -Wextra -Werror -g -o eatmem eatmem.c
$ ./eatmem
Allocated 31118 MiB.Killed
$ ./eatmem 31110
Allocated 31110 MiB.
keep moving your mouse around while the second command is running to see its effect. observe how your system is locked up in the second case when your system is near (but not) out of memory. even the mouse cursor gets stuck. the system restores its responsiveness once you activate the oom killer (alt+sysrq, alt+f).
i only tested this on swapless systems, not sure how it works with swap. the kernel might make some additional memory available by the time you run the second command. you might need to rerun the first command a few times to get the accurate amount of free memory after the kernel ran its cleanup functions.
this is not a hypothetical problem. i occasionally hit this problem with browsers on my old laptop which still doesn't have infinite amount of memory. happens at work too when the processes run into the container's memory limits.
# the problem
so what happens? let's assume there's no swap. in this case the kernel cannot swap out data pages because there's no place to save said data. but it can drop executable pages because those are mapped in from the disk without modification. the kernel drops a least recently used page and loads it back the next time the application uses it. but the kernel might have selected a page that the process needs the next millisecond, and now the process has to wait for seconds to get it back. if this happens frequently enough, the system starts "thrashing" and all the user notices is that everything is slow.
at its extreme it means aggressively swapping out core functionality like "handle mouse events". that's just dumb on an interactive system that i'm actively using.
here's another typical scenario: suppose you have two disks and you copy a large file (larger than your memory) from one disk to the other. you run the copy command and patiently wait until it finishes. then you switch back to your terminal, text editor, browser or other interactive application.
you should notice that the application loads back pretty slowly. why? because during the copy linux just evicted every executable page in the system.
i think it's getting better with the fast ssds we have nowadays sitting directly on the pcie bus: the executable pages load back super fast. but that's just hardware doing magic to paper over the inefficiencies of the software. the problem is still there in linux, just less noticeable. and any small snag annoys me when i know the system should be capable of being super responsive at all times.
and the annoying part is that the problem is solvable. but first let me go through the non-solutions.
# non-solution: disable overcommit
the kernel allows the applications to request more memory than the system has available. most pages start out as a shallow copy of the zero page. only when the application writes to the page does the kernel actually need to allocate it. such pages are not accounted as used memory until that copy happens.
if you disable overcommit then a page is counted as used memory even before the copy happens (i'm simplifying a lot here). the kernel will simply return an error when an application tries to allocate but there's no memory left, even though most of the usage is just untouched copies of the zero page. but nevertheless you would never run out of memory. problem solved, right? no.
there are two complications of this when the system is nearly out of memory:
so yeah, disabling overcommit doesn't work for me. i know this because that was the first thing i tried when i encountered this problem.
# non-solution: cgroups
another solution could be to create a separate protected cgroup for the applications likely to eat up ram such as the browser. if that eats up all the ram, it only breaks that single cgroup. it's very fiddly because you need to know what process will eat up all the ram beforehand. and the problem remains: if the browser eats up all the ram in the cgroup, the browser will still start thrashing.
or you could move sshd, x11, tmux, etc to a separate protected cgroup. this is also fiddly because you need to figure out what to move and even so, the cgroup can still fill up and lead to thrashing. and you also have the headache of figuring out how to size these groups.
# non-solution: swap
i never really understood the point of swap for modern, high-end desktops. i'm sure swap enthusiasts would say you need 50% swap even when you have 93 terabytes of ram. i suppose it makes sense on low-memory systems where people might want slow apps rather than apps that don't work at all. but i'd rather buy more memory or replace memory hungry applications with more efficient ones. if i enable swap, things will be just slightly slow. these are microannoyances but over time they add up and make using computers depressing for me. snappy computer interfaces are very important for me. the system should just let me know if i want too much from it and i'll adjust.
but i think swap helps with the near-oom issue in the sense that the slowdown becomes more gradual rather than sudden. that's all good but it's not what i want. i want things to stay in memory to keep everything snappy.
# partial solution: userspace oom killers
there are a bunch of userspace oom killers: oomd, earlyoom, etc. these are nice and extensively configurable. however they don't feel like a clean solution. first, such a daemon constantly needs to wake up and watch the memory usage like a hawk. i don't like such daemons.
second, it doesn't really prevent the kernel from paging out executable pages. so the kernel can still page out executable code. the large file copy from above might still trigger such evictions.
# partial solution: mlockall
another solution i considered is simply to mlockall each process. if the kernel cannot drop pages, it cannot start thrashing. let the builtin oom killer kill the largest process then, i'm happy with that. mlocking definitely helps. but there are some edge cases here too.
an application might do a large allocation and rely on the fact that physical memory is only used once it starts touching the pages. lots of memory allocators work like this, including go's and java's i believe. with mlockall(MCL_CURRENT | MCL_FUTURE) the kernel would pre-fault all pages, resulting in excessive memory usage for untouched zero pages. so MCL_CURRENT+MCL_FUTURE on its own is not enough.
but nowadays the kernel has MCL_ONFAULT too. it locks stuff in memory only once it has been faulted in. it addresses the "allocate memory for everything, even the zero pages" problem with mlockall i mentioned above. but you still have to run this syscall for every process you have: you'd need to continuously gdb into the processes, call the syscall, then detach. it's a very unclean solution and requires a daemon continuously doing that.
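for reference, here's roughly what that per-process call would look like. this is only a minimal sketch in go, assuming the golang.org/x/sys/unix package and its MCL_ONFAULT constant; in practice the call has to be executed inside every process, which is exactly the unclean part described above.
package main

import (
  "log"

  "golang.org/x/sys/unix"
)

func main() {
  // lock all current and future pages, but only as they are faulted in,
  // so untouched zero pages are not pre-faulted.
  flags := unix.MCL_CURRENT | unix.MCL_FUTURE | unix.MCL_ONFAULT
  if err := unix.Mlockall(flags); err != nil {
    log.Fatalf("mlockall failed: %v", err)
  }
  // ... the rest of the process would run with its pages locked ...
}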
a bit cleaner solution is to not gdb into those processes but to look up their mapped-in files and lock those into memory. with the mincore() syscall you can even find the mapped-in pages and lock only those rather than locking the whole file into memory. however unless the daemon is aggressively monitoring the processes, it might take a while before it detects that a memory hog process exited. cleaning up those locked-in files might take a while and in the meantime you might not be able to unmount disks and you run into that sort of complication.
# potential solution: inheritable mlockall
it would be nice if mlockall could be inheritable. i found an old patch for an MCL_INHERIT+MCL_RECURSIVE option: https://lwn.net/Articles/310168/. i think that would do the job but i don't think it ever made it into the mainline kernel. i see https://lkml.iu.edu/hypermail/linux/kernel/0812.1/00299.html rejected the patch because such attribute inheritances across processes are too "surprising".
the counter-recommendation was to implement mlockall on a cgroup level. well, that too would be fine by me. i haven't found an implementation for that though.
# potential solution: kernel tunable for not dropping mapped pages
iiuc currently linux has a simple least-recently-used algorithm to pick a page to drop. that alone is not enough. do this in addition: if a file backed page was touched in the last x minutes, simply don't drop it no matter what. if there are no other pages to free then just trigger the oom-killer and call it a day.
x can be a tunable. if you set it to 5 minutes, then the "mouse cursor not responding" thing i mentioned above cannot happen. it still allows background applications to be swapped out. i'd set it to an infinite value though.
but in case it's hard to measure a page's last usage, then i'd be more than happy with a binary "never unmap" option too. i.e. let me set vm.swappiness to -1 to make the kernel never unmap memory.
# final words
there's a lot of chatter on this on the internet. https://github.com/hakavlad/nohang is a good entry point to the problem space. at the time of writing its readme links to many other explanations of the problem, discussions, and other solutions.
after writing this post i found https://github.com/hakavlad/le9-patch which is different approach for a kernel tunable to solve the problem. i think that would work too, i just need to set the new tunables to a very high value. i wish it was in the mainline kernel.
in summary all i want is a responsive system under all conditions and linux currently cannot give it to me. it's more important for me than "stability". unfortunately i'm not familiar with kernel development nor how to file bugs for it. so this feature request just remains a dream. these days i have plenty of memory, use efficient applications, know how to trigger the oom-killer on demand so i can live with this bug. maybe i will look into this when retired.
published on 2023-12-03
comment #nearoom.1 on 2023-12-03
It sounds like all your problems can be solved with cgroups. On that note, I don't really follow the argument in your "containerization" section. Can you elaborate with a bit more detail?
comment #nearoom.1 response from iio.ie
oops, i meant cgroups not containers. updated. hopefully my problems with them are clearer now. but maybe i missed something. how would you solve this with cgroups?
comment #nearoom.2 on 2023-12-03
You run risky commands with resource limits (trivial with systemd-run). Orthogonally, you can set memory.low and memory.min for stuff that should keep running no matter what. Any reasonable DE already has a hierarchy set up anyway (e.g. https://lwn.net/Articles/834329/).
Additionally, if you want to keep executable pages cached swap will make your life a lot easier! See also https://chrisdown.name/2018/01/02/in-defence-of-swap.html for additional reasons.
comment #nearoom.2 response from iio.ie
re swap: the tldr literally says "Disabling swap doesn't prevent pathological behaviour at near-OOM". i don't really want "help" or "improvement". i want a "fix".
re limits: i'm not really excited about trying to come up with a limit for each command i might run (everything is risky for me). who uses a desktop like that? and i don't see how it prevents thrashing. even if i put my browser in a cgroup, the browser itself can still thrash and become unresponsive due to the operating system becoming slow.
re desktop environments: are you using one of those (along with swap)? so if you run my repro from above, your system doesn't lock up? and if you run it in the same cgroup as the browser, the browser doesn't lock up? (this latter example would simulate the browser eating up all the ram in its cgroup.)
the more i think about it, the more i think le9-patch is the right approach. that approach should work well alongside the existing facilities such as cgroups and swap. it's a limit in a new dimension.
# uses: a list of tools i use
i like the idea of https://uses.tech, might as well do such a page for myself. so this is a list of random stuff i use or recommend. i don't list everything, only stuff i deem worth mentioning. the items are in alphabetical order.
software:
hardware:
services:
notes for my future reference:
published on 2024-01-03, last modified on 2024-12-13
comment #uses.1 on 2024-01-04
check out https://usesthis.com/
comment #uses.1 response from iio.ie
interesting. i now also found https://uses.tech/. i've renamed the post to /uses to conform, thanks for bringing this to my attention.
# wingaming: windows can be used without license for some tasks just fine
a while ago i bought borderlands 3 + dlcs very cheaply in a steam sale. it's a nice game. this winter i had some free time to play it. but in order to get the max framerate out of it, i decided i'll use windows rather than running it in linux with steam's proton.
so i downloaded the official windows installer and installed windows 10 on my 10 year old gaming machine. it was hibernating in the basement for the past 5 or so years. i tried windows 11 too but that didn't work because my machine didn't have a tpm chip for secureboot or something, i think. the installer just said "your machine does not meet system requirements". (i don't get why developers cannot make the error messages more useful. include the requirement not met, sigh.)
anyway, windows 10 worked fine but until you activate it, you see an "activate windows" watermark in the bottom right corner. and you cannot customize the desktop such as changing the wallpaper either. but otherwise it's a completely functional windows that you can use freely!
i had a valid windows license at some point but i lost the details and i'm pretty sure i wouldn't be able to use it on newer windows anyway. and it makes no sense for me to pay the full license fee just to use the system for a week and then never again. i wouldn't want to use windows as a day-to-day operating system anyway. windows is goddamn slow compared to an equivalent linux. i had to wait seconds for the menu to appear when i right click on the desktop. (i installed it on a spinning disk, not an ssd, but that shouldn't make such a simple operation this slow.)
but anyway, the "activate windows" watermark is annoying because it appears even when you run games full screen. if you can live with it then that's it, game away. but for my own future reference let me document how to get rid of it:
that's it. i've looked at other ways to get rid of the watermark such as regedit hacks but they didn't work. then my vacation time ran out and the machine went back to the basement. it would have made no sense to buy a license just for a few days. and if i had needed one then i would have just accepted the small framerate loss and played it in linux. so this wasn't a lost sale anyway.
(also what's up with operating systems blasting ads into your face right after installing it? i mean they appear in the start menu, in the default browser's starting page, the default search engine's page, ads ads ads everywhere. and people pay for this? i'm very sad about the state of affairs of computers.)
published on 2024-01-20
# titles: omit periods and uppercase in title-like contexts
this post has non-textual or interactive elements that were snipped from this backup page. see the full content at @/titles.html.
consider these contexts:
most of these (but not all) should need only one sentence. do you put a period after them? how do you decide? here are my rules:
and furthermore: if you don't need a period, you shouldn't need uppercase either! so a book title or the first line of a git commit should not start with uppercase! go error strings are like this. no periods, no capitals: https://google.github.io/styleguide/go/decisions#error-strings.
consider these things like sentence fragments rather than full sentences. "topic phrase" might be the correct technical term for these, dunno.
i'm well aware that the lowercase ship has sailed a long time ago: people are used to uppercase way too much. but here's a trick for git commits and issue titles: use the "$module: title" pattern. think of "module" as a go module where the style is to use lowercase. then the lowercase style will be more natural, easier to swallow for others. e.g. you are adding a new string conversion method to the standard strconv library: "strconv: parse integers with Ki/Mi/Gi suffixes". or if ui animations in a product are distracting, you can open a github issue titled like this: "ui: allow disabling animations". look, no uppercase needed!
also notice that it's easy for me to put the sentence's ending period after the closing quote when i am quoting these titles. i know that's not the official english rule but my ocd tells me that the period must be after the closing quote. moving the trailing period out of the quotes is just extra nuisance when concatenating these strings programmatically. on the other hand i really don't like two periods like this: "some title with period.". no such problem when the titles contain no periods.
[non-text content snipped]
i didn't find much discussion about this on the internet hence i thought i'd come up with my own rules to apply in my life.
here are some discussions i found. i'll add better ones if i find them:
edit 2024-08-18: https://www.conventionalcommits.org/ mentions a similar trick as above for git commits and all their examples are lowercase. yay!
published on 2024-02-03, last modified on 2024-08-18
# numids: yearstamp numeric unique ids too
this is a followup to @/yseq but for random numeric ids.
consider the unique ids that are used in urls such as in reddit urls or the youtube video ids. these are strings of alphanumeric characters. that gives great flexibility but strings come with some performance downsides in most programming languages. an int64 id in comparison is pretty easy to use, fast, and doesn't generate pressure on the garbage collector. and if a user ever needs to enter an id manually somewhere on a keypad, digits are always easier to type than strings (example: credit card numbers or bank account ids). i have a soft spot for int64 ids and prefer using them over strings in most cases.
there's a small caveat to that: javascript doesn't have int64s but only floating point numbers. so to ensure javascript never garbles the id, it's best to keep the id value less than 2^50 or so. but that should be still good enough for most cases. and there's no need to worry about accidentally generating a naughty word with integers.
on the flipside, int64 ids can have a high rate of collisions when ids are generated at a high rate. so relying on int64 might be a bit risky, but for posts and userids in small forums or issue tracker ids, it's more than enough. another downside could be that int64 ids are more "guessable" but this probably doesn't matter much for forum post or issue tracker ids.
# id length
how big should the id be?
i really love short ids. if the id is short, i can even remember it. e.g. if in my project a contentious issue has a memorable 4 digit id, i might remember it and look it up directly via id rather than always searching for it.
context: i love to type urls from memory perfectly. i never rely on autocompletion or history completion. i have relatively good memory for this. some websites handle this quite well thanks to their simple url structure. some are terrible. but if i create a website, i want it to have a simple url structure.
keep the id length short if the system doesn't generate a lot of ids. but do vary the length: some ids should be 5 digits long, some 7 digits. this way nobody can rely on a specific length. furthermore the id length can simply grow if there are many collisions during generation. this way the system handles an increased id pressure gracefully.
perhaps distinguish id length for humans and robots. if an alerting system creates automated tickets, give those tickets long ids. this way robots don't eat up the short id space that humans prefer.
# yearstamping
in @/yseq i explained my love for putting some date information into the ids. the same can be done here too: append the last two year digits to the end of the id. so an id like 12323 means it's an id from 2023. or use the last 3 digits if worried about the year 2100 problem, e.g. 123023 for an id from 2023.
it needs to be a suffix because the id length is variable. putting it at the end means both the generation and extraction of this piece of data remains trivial programmatically.
yearstamping also reduces the chance for collisions. a new id can only collide from other ids from this year. this can make the uniqueness check a bit faster.
it also allows the administrators to operate on old ids easily. for instance they can use a glob like "*23" to select all ids from 2023 for archiving.
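a minimal sketch in go of what such a generator could look like, using the standard math, math/rand, and time packages. the ndigits parameter and the exists callback for collision checking are assumptions of mine; growing the length on collision follows the graceful-growth idea above.
// newID returns a random id with ndigits random digits followed by the last
// two digits of the current year, e.g. newID(3, ...) could return 12323 in 2023.
func newID(ndigits int, exists func(int64) bool) int64 {
  for {
    lo := int64(math.Pow10(ndigits - 1))
    hi := int64(math.Pow10(ndigits))
    id := (lo+rand.Int63n(hi-lo))*100 + int64(time.Now().Year()%100)
    if !exists(id) {
      return id
    }
    ndigits++ // grow the id length gracefully under collision pressure.
  }
}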
# weekstamping
in case you are doing full alphanumeric ids, then you can easily weekstamp too. just use A..Za..z for the week at the beginning (starting with capitals to make it easily sortable). that character set is 52 characters long, almost the same amount as the number of weeks in a year. just use lettertable[min((yearday-1)/7, 51)] to sanely deal with that pesky 53rd week. you can also prepend the year number. the length of the year is no longer a problem because the weekstamp is a letter so you know where the year ends. no year 2100 problem this way. so an id like "9qdQw4w9WgXcQ" would mean an id from 2009, week 43. or an id like "16XXqZsoesa55w" would mean an id from 2016, week 24. or an id like "123Cabc" would mean an id from 2123, week 3.
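a minimal sketch of that prefix in go (using fmt and time); the year formatting as "year minus 2000" is my reading of the examples above.
// weekstampPrefix returns the year+week prefix described above, e.g.
// "9q" for 2009 week 43 or "16X" for 2016 week 24.
func weekstampPrefix(t time.Time) string {
  const lettertable = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
  week := (t.YearDay() - 1) / 7
  if week > 51 {
    week = 51 // fold the occasional 53rd week into the last letter.
  }
  return fmt.Sprintf("%d%c", t.Year()-2000, lettertable[week])
}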
sidenote: you can keep 64 (or 50) bit long ids even if you present the ids as strings to the user. you can do this if you format the numeric id as a 26+26+10=62 base number when presenting it to the user. then you have the best of both worlds: short ids + lightweight representation in code.
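a sketch of that base 62 conversion in go; the digit ordering here (0-9, A-Z, a-z) is an arbitrary choice of mine.
const digits62 = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"

// formatID62 renders a non-negative int64 id as a short base 62 string.
func formatID62(id int64) string {
  if id == 0 {
    return "0"
  }
  s := []byte{}
  for id > 0 {
    s = append([]byte{digits62[id%62]}, s...)
    id /= 62
  }
  return string(s)
}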
# comparison to yseq
the downside of @/yseq is that the id length must remain static if the users want to use it to compare events chronologically via the less-than operator over the id numbers. no such length restriction on random ids because such comparison intentionally doesn't make sense. with sequential ids users often try to farm sequential ids to grab the round or nice numbers. no such incentive with random numbers.
go with random ids unless the ids need to be able to express a chronological relationship between them. use an int50 id if you don't expect to need many ids (e.g. less than a million per year).
# edits
published on 2024-03-01, last modified on 2024-03-22
# postreqs: make http post requests via javascript
if i have a web form such as a login page (username+password) or a comment box then i try to use the following pattern:
contrast this to the traditional approach where the redirect/reload always happens on form submit. i think the in-page approach has much better user interface properties than reloading the whole page with the result of the http post request. and i believe the above is much easier to implement than the traditional approach. the http post endpoints can remain pure api endpoints that a few lines of javascript can handle.
furthermore errors like overload are much easier to handle gracefully. on a traditional page the user must continuously retry and hope for the best. and this often results in duplicate posts. the javascript approach can automatically retry with some fancy retry algorithms. all while keeping the web page responsive and the user well informed about the status.
the downside of this approach is that it requires javascript. that's fair game nowadays if done reasonably. i think it's reasonable to avoid catering to the lowest common denominator. rather make the whole website and its content also accessible via an api so that it's easy for the users to write custom frontends. rely solely on the same api for the official frontend. this ensures that if you ever go overboard, users should be able to respond by writing a better frontend. make replacing your site easy rather than making it artificially important. that's how you can signal trust and it's a form of long term commitment (@/commitments) to be a good guardian of whatever data the users trust you with.
(speaking of responsive ui, here's a good overview what latencies we should be targeting: https://www.nngroup.com/articles/response-times-3-important-limits/. ideally a site's response is so fast that the user doesn't even notice step 3's feedback at all.)
published on 2024-03-09
# tokengen: token generator for media access
this post has non-textual or interactive elements that were snipped from this backup page. see the full content at @/tokengen.html.
don't ask.
[non-text content snipped]
published on 2024-03-16
# abnames: create new names from abbreviations
software engineers need to create new terms all the time, be it for tools, services, packages, modules, etc. the name matters a lot: everybody will be referring to the new entity using the given name in forum comments, code variable names, filenames, etc.
suppose you are creating a "production logging service" in your company. will you call it production-logging-service? that's quite long and the presence of dashes creates problems when used in CamelCase languages such as go. and you can't use dashes in variable names in non-CamelCase languages either. there you would need to use production_logging_service. you can no longer search for production-logging-service to find all related usage, you would need to do a case insensitive search for "production.?logging.?service". that would then find both ProductionLoggingService and "Production Logging Service". and it takes long to type in too. it's a hassle. avoid multiword names.
another approach is to come up with a completely new, unrelated but cool sounding letter combination such as "broxus". (i just made that up, any similarity to real entities is purely accidental.) this approach is dumb because the name is super hard to remember especially when you have a lot of such obnoxiously named services.
a third approach is to name them after some historical or mythological figure that did something related. e.g. "herodotus was a greek historian that logged the ancient history" so let's name the service herodotus. it's a bit better but still silly. i have very bad memory for historical figures so such associations would be very hard for me to maintain, especially when you have a dozen services named after ancient greek people.
a fourth, my preferred approach is that you take the reasonable sounding long explanatory name and create a short, easy-to-pronounce abbreviated name from it. so i'd name "Production LOGging Service" as "plogs". it must be easy to pronounce too. i have very good memory for this sort of naming. my mind can very quickly remember to break that name into "p-log-s". from there it can quickly associate to "production logging system" and boom, i know what service i'm reading about once i've seen the rule. and if it's unique enough then searching for documentation about the service will become a walk in the park.
there's one requirement for this: make sure these are documented. if you have a large project, then have a glossary that explains most of the commonly used abbreviations. and don't go overboard. only name big components like this, not every function.
even if you can't come up with a good name, a simple abbreviation is often better than using the full name or an unrelated name. that's how we got html, css, sql, png, gif etc and they ended up being quite usable in terms of searchability at least.
https://news.ycombinator.com/item?id=39299974 lists some nice examples for programming languages:
i keep doing this on this blog to keep all post urls short. even in this one: "abnames" means "abbreviated names". i write most of my posts to myself as a reference of my thoughts and opinions and i do revisit some posts regularly. it's super convenient to have a short, relatively easy to remember url to type.
published on 2024-03-23
# aclsystem: encode user and group names into ids to keep acls simple
caveat emptor: this is another fantasy post where i think about how i would design a system despite having zero experience with such systems. usually i daydream about being a superhero but today it's about acl systems in a small/mid sized tech company.
suppose you have files in a filesystem, tables in a database, tickets in an issue management software, documents in a content management system, etc. you want to make it configurable which users can access the entities in your system and how. you could have a couple capabilities or access control lists (acls) and for each acls a list of groups or users who have that capability. examples:
suppose you own a file and you want alice to read it. all you need to do is to add alice to the read capability's list. easy peasy. though note that this isn't representable in the standard posix file permissions model. i think that's a very inflexible model and the above is more powerful. these lists don't have to be unbounded. even if you bound them to 4 entries, you already have a quite flexible system.
# ids
how do you represent these acl lists? ideally each user and group in your system has an int64 id associated with it. then each acl is just a list of int64s. that's a more compact representation than storing these as lists of strings.
how do you map a username to an int64 and vice versa? one approach is to keep a database around that contains the string<->int64 mappings. but that's overkill! there's a much simpler approach if you accept some limitations.
limit usernames to the form of "[basename]-[suffix]". basename can consist only of at most 10 letters (no digits or underscore allowed). suffix can be one of 8192 hardcoded suffixes.
you can encode one letter out of 26 in 5 bits (2^5 = 32). 10 such letters means you need 50 bits. you can encode one suffix out of 8192 in 13 bits. now we have a 63 bit long number.
there's one bit left: let's use it to signal whether we want group expansion or not. if the id is negative, then the username doesn't refer to the user itself, but to a group expansion that is looked up in some other system.
# id mapping example
let's encode 'a' as 00001, ..., 'z' as 11011. and to make the implementation of encoding/decoding simple, store it in reverse. so encode "alice" as "ecila".
that would be the int64 id for those users. the implementation is simple, to decode you would need something like this in go:
name := ""
for ; len(name) < 10 && id&31 > 0; id >>= 5 {
  name += string(rune('a' + id&31 - 1))
}
encoding is similarly simple if the name already meets the limitations.
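here's a sketch of that encoder in go to complement the decoder above; the function name is mine and the input is assumed to already be a valid name of at most 10 lowercase letters.
// encodeName packs the name into the low 50 bits, first letter in the lowest
// 5 bits so that the decode loop above recovers it in order.
// for example encodeName("politics") returns 656379523568.
func encodeName(name string) int64 {
  var id int64
  for i := len(name) - 1; i >= 0; i-- {
    id = id<<5 | int64(name[i]-'a'+1)
  }
  return id
}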
encoding names like acme-team, login-service, politics-discuss, accesslogs-readers can be done via the suffix logic. you just need a builtin constant map like this: 1-team, 2-service, 3-discuss, 4-readers, 5-group, ...
"politics" translates to 656379523568 and the suffix code for -discuss is 3 so 656379523568 + 3<<50 = 3378356100051440 is the id for politics-discuss. this could be a group that holds all the members subscribed to mailing list called politics-discuss.
to express all members of politics-discuss, use the id of -3378356100051440. note the negative sign. the member expansion would be provided via some external group expansion service.
# acl example
suppose alice has a file that she wants to share with her manager bob and the lawyers-team.
using numbers this translates to this:
checking if a user can read the file consists of two steps: if the user's id is in the readers list, access is allowed. if not, then the system needs to group expand each group reference. this is more expensive but with some caching it could be a fast enough operation.
# the suffix map
the list of suffixes would be carefully selected to express common ideas. e.g. many tools and projects want to have a mailing list to discuss it so many teams would like a group with a -discuss ending name. so it makes sense to have that as one of the possible suffixes. this map can grow over time. but each addition must be carefully vetted for usefulness. there are only 8192 possible suffixes, it can run out very quickly if you allow users to register them without oversight.
the suffix map would be embedded into each application as a constant. this means that there's some delay until a new suffix is picked up in all applications. this shouldn't be a problem because most applications only care and communicate via the int64 ids. the map is only needed when the application wants to do a conversion between the id and the human-readable name. but even if the map is not updated, it can just use the raw id as a placeholder.
so decoding 3378356100051440 into politics-3 should be reasonable enough. similarly if a ui wants to encode politics-discuss into an id but doesn't know the id for -discuss then the ui simply returns an error. then the user can enter politics-3 and that should work too.
# namespaces
if it makes sense, you might sacrifice one (or more) bit from that bitmask for namespaces. suppose you are a web company and you have your internal employees and external users. you want to assign ids for both. use this new bit to decide whether an id is for an internal user or an external one.
if it's internal, you will have a selection only from 2¹²=4096 suffixes. if it's external, then the remaining 12 bits could be used differently than suffixes. maybe use it for 2 extra letters in the username, 12 letters in total. or have 5 bits (0..31) for suffixes in case your website allows users to form groups (-discuss, -members, -announce) or implement bots (-bot). and then the remaining 7 bits (0..127) for yearstamping with the last two year digits. so if a user registers in year 2024, they get a username like alice24. other users can immediately tell how fresh a user is and it prevents account reuse. see @/yseq for other benefits why yearstamping ids in general is good. the internal username decoders can then distinguish between internal and external users solely based on whether the basename part of the username has numbers or not.
# abnames
the 10 letter, no digit restriction can be quite painful. for human usernames that might be fine, nobody likes long names anyways.
for service roles and product names it might feel more limiting. but judicious use of the @/abnames philosophy can give plenty of short names. these short names don't need to be perfect. the abnames come with a glossary so the user can easily look up the full, human readable name of the product.
in fact most user interfaces should provide a popup window that explains the details of the role, including the full product name. such a feature is also useful for human usernames: to see the full name, the profile photo, responsibilities, availability, etc.
# humans vs robots
often there's a desire to distinguish between humans and robots. for example in the above hover-popup-box case a system could look up data differently for humans vs robots. for instance the popup box wouldn't need to look at calendar availability for a robot. another example would be enforcing a human-review rule: each commit must be reviewed by a human. in that case the review system would need to be able to tell if an entity is a human or not.
to make this simple, use the following rule: the empty suffix means humans. in other words if a username contains a dash, it's not a human. robots can use a -bot or -service suffix.
i'm not fully sure about the usefulness of this rule because i really like short names. and i can imagine there would be some bots where a short name would be useful. but easily recognizing fellow humans in our complex systems is getting more and more valuable so i think it's worth it. this way you can easily tell which one is human between alice and alice-bot.
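with this rule the check is trivial; a minimal sketch using the standard strings package (the id variant relies on the empty suffix encoding to all-zero suffix bits, ignoring the optional namespace bit from the previous section).
// isHuman: a username without a dash (i.e. with an empty suffix) is a human.
func isHuman(username string) bool { return !strings.Contains(username, "-") }

// isHumanID: same check on a positive encoded id; an empty suffix means the
// bits above bit 50 are all zero.
func isHumanID(id int64) bool { return id > 0 && id>>50 == 0 }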
# groups
i recommend keeping group membership data in version control. you could have the following configuration:
the g/ prefix in "g/acme-team" refers to an expanded group. so login-service will contain alice and bob as members.
the group definitions need to be expanded recursively. so accesslog-readers would contain alice, bob, and charlie. this means the group membership lists must be acyclic.
tracking human memberships in version control for a mailing list like politics-discuss would be overkill. so track groups with high churn (such as memberships for mailing lists) differently, e.g. in a database, and have the users join or leave via a ui rather than editing text files.
then create a service that serves these group expansions. make it possible for clients to fetch all members for a group and then watch for updates. this means membership lookup remains local in the client and thus fast.
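the expansion itself can be a small recursive function; a sketch in go using the standard strings package, assuming the static group definitions have been loaded into a map where members prefixed with g/ refer to other groups.
// expand returns the full member set of a group, recursively following g/
// references. the group graph is assumed to be acyclic as described above.
// e.g. expand(groups, "accesslog-readers", map[string]map[string]bool{})
// would return alice, bob, and charlie in the example above.
func expand(groups map[string][]string, name string, cache map[string]map[string]bool) map[string]bool {
  if members, ok := cache[name]; ok {
    return members
  }
  members := map[string]bool{}
  for _, m := range groups[name] {
    if sub, ok := strings.CutPrefix(m, "g/"); ok {
      for u := range expand(groups, sub, cache) {
        members[u] = true
      }
    } else {
      members[m] = true
    }
  }
  cache[name] = members
  return members
}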
tip: log every time you look up a member in a group as part of making a decision on access. log it with reason, example:
func IsMember(group, user int64, reason string) bool
...
acls.IsMember(acls.Id("accesslog-readers"), acls.Id("alice"), "raw access")
log it into a central logging system where users can later look up which memberships users actually used and when was a membership last used. such information will be super useful when trying to lock down privileged groups. eventually you will need such information so it's best if the system is designed with this in mind right away.
# special groups
to make expressing some things easier, create a couple special groups:
the expansion of these groups would be handled in the lookup logic specially: no lookup would be needed.
# management
it makes sense to associate some metadata with users, roles, and groups. e.g. for roles you could configure the full description of the role, the 4 byte linux uid_t, etc. for groups you would configure whether it's a mailing list or not, whether humans can join on their own via an ui, etc.
suppose you have a version control system with per directory access control. then create a directory for every admin team wishing to manage groups and put their roles under them. then all modifications in the files have to be approved via that admin team.
example:
# plogs-admins/plogs-admins.txtpb
description: "group admin management team for plogs (Production LOGging System)."
members: [
  "alice",
  "bob",
]

# plogs-admins/plogs-discuss.txtpb
description: "mailing list for plogs (Production LOGging System) related topics. anyone can join."
group_join_mode: "self-service"
mailinglist {
  moderators: ["alice"]
  readers: ["g/all-special"]
}

# plogs-admins/plogs-backend.txtpb
description: "service for clients wishing to upload production log entries into plogs (Production LOGging System)."
vm_management {
  linux_uid: 1234
  vm_admins: ["g/plogs-admins"]
}

# plogs-admins/plogs-frontend.txtpb
description: "service for users wishing to browse the production log entries in plogs (Production LOGging System)."
vm_management {
  linux_uid: 1235
  vm_admins: ["g/plogs-admins"]
}
then create a service that serves this metadata for other systems. so when the mailserver receives an email to "plogs-discuss@example.com" it can check this service whether it's indeed a mailing list. if so it then asks the group expander service for the members and forwards the email to them.
an edit from 2024-12-07: an alternative idea is to have one file per basename and define each suffixed group in it:
# plog.textpb
description: "Production LOGging System: service that indexes events from production systems"
groups {
  "admins": {
    description: "admins"
    static_members: [ "alice", "bob" ]
  }
  "discuss": {
    description: "mailing list for plogs (Production LOGging System) related topics. anyone can join."
    group_join_mode: "self-service"
    mailinglist {
      moderators: ["alice"]
      readers: ["g/all-special"]
    }
  }
  "dev": {
    description: "the developers who develop new features in the codebase"
    static_members: ["charlie", "dave"]
  }
  "backend": {
    description: "service for clients wishing to upload production log entries into plogs (Production LOGging System)."
    additional_admins: ["g/dev"]
    vm_management {
      linux_uid: 1234
      vm_admins: ["g/dev"]
    }
  }
  "frontend": {
    description: "service for users wishing to browse the production log entries in plogs (Production LOGging System)."
    additional_admins: ["g/dev"]
    vm_management {
      linux_uid: 1235
      vm_admins: ["g/dev"]
    }
  }
}
the "admins" group is a mandatory group with static members that describes who can approve changes related to this "family" of roles. whenever a change is made to such a file, a robot could look at the difference and allow committing only if the commit has approvals from at least two admins. if plogs-discuss is changed then 2 approvals are needed from "alice" and "bob". but if plogs-backend is changed then 2 approvals are needed from "alice", "bob", "charlie", and "dave" thanks to the role's additional_admins setting.
# disclaimer
i admit, i'm not sure i'd design a real system exactly like this. 10 letters can be quite limiting. this system doesn't scale up to millions of employees creating millions of microservices, each with a different username. the names would become very cryptic very fast. but if the company has fewer than a thousand users in its system, this should be a pretty simple way to manage things. i like the simplicity and compactness of this design so it could be fun to play around with in non-serious environments.
published on 2024-04-01, last modified on 2024-12-07
# statusmsg: use status messages instead of percent done indicators
in @/postreqs i linked to https://www.nngroup.com/articles/response-times-3-important-limits/. it mentions that slow user interface actions should have a percent done indicator. i disagree with that. i do agree that some form of feedback must be given, i just disagree that it should be a percent done indicator. percent done indicators have their place where the progress is very steady, such as file downloads. but for many operations (e.g. game loading screens) percentages are terribly unreliable. and even in the download case i'd just prefer that the interface tells me a detailed status instead: size of the total transfer, already transferred data, speed, and the estimated completion time.
the application should be honest and tell the user the actual operation being done at any given moment. e.g. in a game loading screen it could just print that it's loading files (+ which file), it's uncompressing, compiling shaders, etc. if users complain about slow loading, they will also report which step is slow which will simplify debugging and optimization efforts. e.g. they complain about shader compilation? then it's clear that precompiled shaders would be a nice investment. avoid silly "reticulating splines" type of joke messages. that won't be useful for anyone.
print only the current action at any moment. don't bother keeping the full status history. at least don't print the history in the user interface. it's nice to keep them in logs but the user interface should be clutter free.
this is pretty easy to implement on webpages. just have a "status" element somewhere on the page and update it like this:
<span id=status></span>
...
// send login request via http post.
status.innerText = 'logging in...'
fetch(...)
...
// redirect to the login landing page after a successful login.
status.innerText = 'login success, loading frontpage...'
window.location.href = '...'
...
// clear status message when user starts editing the form.
status.innerText = ''
it is similarly easy in command line tooling (go example for linux):
// setStatusf writes the passed-in single line status message to stderr.
// subsequent status writes update the previous status.
// use setStatusf("") to clear the status line before printing anything to the screen.
// avoid putting newlines into the status message because it breaks the clearing.
func setStatusf(format string, args ...any) {
  // extract terminal width per https://stackoverflow.com/questions/1733155/how-do-you-get-the-terminal-size-in-go.
  var winsz [4]int16
  r, _, _ := syscall.Syscall(syscall.SYS_IOCTL, uintptr(os.Stderr.Fd()), uintptr(syscall.TIOCGWINSZ), uintptr(unsafe.Pointer(&winsz)))
  width := int(winsz[1])
  if r != 0 || width < 10 {
    // not a terminal or too narrow.
    return
  }
  msg := fmt.Sprintf(format, args...)
  if len(msg) >= width {
    msg = msg[:width-6] + "..."
  }
  fmt.Fprintf(os.Stderr, "\r\033[K%s", msg)
}

func printFeed() error {
  setStatusf("looking up dns...")
  addr := dns.Lookup("example.com")
  setStatusf("fetching feed...")
  feed := rss.Fetch(addr, "/rss")
  setStatusf("parsing feed...")
  parsedFeed = rss.Parse(feed)
  setStatusf("")
  fmt.Println(parsedFeed)
  return nil
}
the "\r\033[K" terminal escape sequence combination means to go back to the beginning of the current line and clear everything from the cursor. this only works if the previous status message didn't contain any newlines, hence the warning in the doc comment.
note that this is printed only when the tool is used interactively. as a user i would be delighted to know what is happening when i'm waiting for a tool to finish. it makes debugging much easier when things go wrong.
suppose i noted that the dns lookup succeeded but then the tool got stuck in the "fetching feed..." step. at this point it will be clear to me that it's probably the website that is having problems rather than my networking setup.
this is not needed if the action or tool is very fast, only when it's normal that it can take more than a second. e.g. when there's networking involved.
also note that the above code examples are optimized for the occasional status updates. if you have a rapidly updating status (e.g. loading many files), then a polling approach is better to reduce the load on the terminal:
var status atomic.Pointer[string]

// displayStatus keeps displaying the value of status until it becomes empty.
// once empty, it writes true to done to signal that the status line was cleared.
func displayStatus(done chan<- bool) {
  const updateInterval = 500 * time.Millisecond
  defer func() { done <- true }()
  lastStatus := ""
  for {
    // extract terminal width per https://stackoverflow.com/questions/1733155/how-do-you-get-the-terminal-size-in-go.
    var winsz [4]int16
    r, _, _ := syscall.Syscall(syscall.SYS_IOCTL, uintptr(os.Stderr.Fd()), uintptr(syscall.TIOCGWINSZ), uintptr(unsafe.Pointer(&winsz)))
    width := int(winsz[1])
    if r != 0 || width < 10 {
      // not a terminal or too narrow.
      return
    }
    msg := *status.Load()
    if msg == "" {
      fmt.Fprint(os.Stderr, "\r\033[K")
      break
    }
    if msg == lastStatus {
      time.Sleep(updateInterval)
      continue
    }
    lastStatus = msg
    if len(msg) >= width {
      msg = msg[:width-6] + "..."
    }
    fmt.Fprintf(os.Stderr, "\r\033[K%s", msg)
    time.Sleep(updateInterval)
  }
}

func setStatusf(format string, args ...any) {
  s := fmt.Sprintf(format, args...)
  status.Store(&s)
}

func example() error {
  setStatusf("starting...")
  done := make(chan bool)
  go displayStatus(done)
  for i := 0; i < 3000; i++ {
    setStatusf("doing action %d...", i)
    time.Sleep(time.Millisecond)
  }
  setStatusf("")
  <-done
  fmt.Println("done")
  return nil
}
the status updater is now a background goroutine. it wakes up twice a second to look up the current status and print it. this approach avoids spending too much time in the write syscall printing status updates that the user wouldn't even have a chance of reading anyway.
there's another nice benefit of having such a global status variable even if you don't print it. you could periodically sample it and then you would get a nice profile of what your application is doing. an ordinary code profile would only tell you which code is running but this could tell you which file takes the longest to load. or if you have a crash, the status global could give you additional debug data on what was happening at the time of the crash.
anyway, now go forth and add status messages to all the slow tools and interfaces!
published on 2024-04-08
# signups: allow signing up for web services only via invite or via payment
imagine creating a discussion forum or community site like reddit or twitter from scratch. if the site is popular and allows free registration then that creates a huge amount of work for the moderators to keep spam at bay. it would be a battle that the site cannot really win.
what's a good deterrent against this? the simplest approach is to ask for some one time registration fee, like $10. if a spammer creates thousands of accounts then, well, it's raining money for the site. clearly spammers won't do this so they will avoid the site. good! it doesn't solve all spam but it limits its spread. account bans have more weight to them.
to make such payment more attractive for users, offer to send that money to charity. this would clearly signal that the site is meant to be free, the paywall is there only to reduce the spam. it also makes it clear that this is a non-refundable fee.
i do see this mechanism on some sites. chrome web store, android play console, microsoft dev account, and probably many other sites ask for a one time registration fee.
but what if the site wants to allow free accounts too? for that let users invite each other. the invite-graph could provide very useful insights for combating spam accounts. and have the invites regenerate over time, such as 1 per week up to a max of 6 in total. so if a user has 6 or more invites, they won't get further free ones until the remaining invites drop below 6. the limit can be adjusted based on the desired growth factor. limiting the free invites prevents a user from hoarding their invites and then creating a lot of new accounts in a short amount of time. this is how gmail started and there are many private communities that work like this.
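a sketch of that regeneration rule in go, assuming a weekly job that runs over every account; the constants are the ones from above.
// weeklyInviteRegen gives the user one more free invite per week, but only
// while they have fewer than 6 free invites banked.
func weeklyInviteRegen(invites int) int {
  const freeInviteCap = 6
  if invites < freeInviteCap {
    return invites + 1
  }
  return invites
}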
perhaps also allow paying for other people's registration fee too. e.g. pay $100 and get 10 paid invite links.
this invite-or-pay will greatly limit the growth of the site. perhaps allow free registration initially and set up the above limitations only after the site grew to a healthy size and spam is getting out of control. not allowing unbounded growth is good anyway. small, focused communities are much healthier than free-for-all mega-communities.
creating a payment system can be tricky though. one approach would be to create a business account with paypal, stripe, revolut, paddle, lemonsqueezy, shopify, or a similar company and then use their api to add payments to your site. but that's quite an involved process given the requirements these systems have. alternatively something like ko-fi or buymeacoffee could work for the initial setup too. i haven't really used them before but their api and webhooks seem relatively easy to build on.
# edit on 2024-05-09
i realized that in @/msgauth i had another idea for limiting signups: authenticate via whatsapp. this means users would need a unique working phone number for each registration. getting those has some barriers so it might be a good way to limit spam registrations.
# edit on 2024-08-01
note to self, maybe platforms like github sponsors, opencollective, liberapay, goteo, etc could be used as a paywall too. https://wiki.snowdrift.coop/market-research/other-crowdfunding is a comparison site of various platforms. it's a bit outdated but at least it's nice to see which platforms are still around and thus are somewhat more established.
# edit on 2024-10-07
lobsters works on an invitation system: https://lobste.rs/about#invitations. seems to be working quite well for them.
published on 2024-04-15, last modified on 2024-10-07
# limits: create and enforce limits for web services
suppose i'm offering a web service for my own users and i want to protect it against abuse. i can already limit signup via the methods mentioned in @/signups. but that's not enough: i should also ensure no single user can degrade my service on its own.
# throttling
one approach is to throttle excessive usage. internet access is often throttled. sometimes it's advertised as "unlimited traffic" at "unlimited bandwidth". what really happens (in the better cases at least) is that after a certain amount of traffic the bandwidth is throttled to slow speeds. so the mobile carrier or isp might provide the first 10 GiB in a month at 1 Gbps and then the rest at 1 Mbps. i think that's a fair way to limit services. but be honest about it: just explain the limits and don't say "unlimited" dishonestly as a marketing ploy.
but services where throttling works well are quite limited. it could work for fluid-like continuous services where giving a lesser amount of the service is also fine. e.g. take tap water as a subscription. this is usually implemented via paying for whatever amount the user used. an alternative solution could be to provide the users and homes with a fixed amount of water at full pressure. the pressure drops when that amount is exceeded. sure, people should be able to sign up for unlimited usage at full pressure but if most people don't need it, then let them safeguard their bills with limits like that.
# tokens
suppose i want to limit something more discrete: how many comments a user can post per day, how many images can the user upload per day, how many requests a client can make per hour, etc. then a token based system might work quite well.
suppose i want to limit that my developers don't run the expensive integration test suite more than 4 times per day on average. then i could create a counter that tells the user the amount of runs they have in balance. if it's zero then they can no longer trigger the test. and replenish their token count every day like this:
newtokens = max(newtokens, min(oldtokens+4, 12))
this also allows accumulating more tokens over time so they can burst if they weren't testing a lot the previous days. i think the ability to burst is important otherwise the service would be unfair to people who are not constantly online but want to use the service in a batched manner. e.g. a developer might prepare a dozen commits while disconnected from the network for a day or two and then wants to run all the tests at once. that should be supported too.
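a sketch of that daily replenishment in go (using go 1.21's builtin min and max); i read the outer max in the formula as "never take tokens away from a balance that is already above the cap", e.g. because the user bought extra tokens.
// replenish computes the new daily token balance: add 4, cap at 12, but
// never reduce a balance that is already above the cap.
func replenish(oldTokens int) int {
  newTokens := min(oldTokens+4, 12)
  return max(newTokens, oldTokens)
}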
let the user queue up their usage once they are out of tokens rather than just flatly refusing to service their requests. e.g. in the integration test case the queued up tests could then run automatically at midnight when the tokens replenish. though note that excessive queuing might lead to other problems, see https://en.wikipedia.org/wiki/Bufferbloat.
but also let users buy tokens or simply bump the above limits with a regular paid subscription. so if i know one of my developers is super productive then i could let them regain 6 tokens per day up to a max of 20.
# credit
i quite like fly.io's pricing structure. it has many different services, each metered separately. i can use them however i want and at the end of the month i get a bill. but each month they credit $5 worth of usage. so if i stay below $5 worth of usage, i'm using the site for free.
furthermore they allow me to pre-pay my account. if my usage exceeds the credit available on my account, they just suspend my virtual machines. i find that pretty neat as it avoids surprise bills. i wish i could set daily limits though. i'd set the limit to $1 of usage. so even if one day i get a ddos attack or i mess something up, the next day i can start over with a clean slate.
they also have monthly subscription plans. higher tiers get me more features such as access to support. and whatever monthly subscription fee i pay, i get that amount of usage for free by the same credit method described above.
i think similar approach could work for many things where the service consists of many dimensions and i want to price each dimension separately. this way i don't need to think about freebie quotas for each dimension separately, just gift certain amount of the bill for each user each billing cycle.
# probabilistic rejection
the above were methods for limiting usage from a single user. but how could i protect my service against many users trying to use it simultaneously?
suppose my server can have only 200 inflight requests at any given moment. the simplest approach is to reject any request that would cross the 200 inflight requests threshold. but this makes the website go down way too suddenly.
smooth this out with probabilistic rejection. accept all requests until 100. then reject incoming requests with a probability of (inflight - 100)/100. if there are 150 requests in flight, requests will be rejected with 50% probability. at 200 inflight requests, they will be rejected with 100% probability. the full formula for the probability is this, assuming n is the maximum amount of inflight requests and u is the current usage: max(0, (u - n/2) / (n/2)).
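a sketch of that formula in go (using math/rand), assuming n is the configured maximum and u is sampled when a request is admitted.
// shouldReject implements the probabilistic rejection described above:
// accept everything below n/2 inflight requests, then reject with linearly
// increasing probability until 100% at n inflight requests.
func shouldReject(u, n int) bool {
  p := float64(u-n/2) / float64(n/2)
  return p > 0 && rand.Float64() < p
}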
if possible, add smart retry logic to the client side, similar to what i wrote about in @/postreqs. or just tell the user as it is: the site is under load, come back a bit later and try again. hopefully it will drive away just enough users to keep the server load under control but not more. this way load should be smoothed out leading to smaller peaks with slower but relatively smooth experience on the user side.
variants of this can be used in many situations where i want to limit many users trying to access a limited resource: new account creation, new comments in a thread, tickets for events, etc. think of it like a lottery.
# cooldown
there's also @/cooldown which i use for the completely anonymous and registration-free comments below. i think that's a pretty generic technique too.
opening up a service to the internet can be scary. but judicious use of various forms of limits can keep everything under control. this post is just a reminder for myself on the ways i can do that if i ever decide to write an online service.
published on 2024-05-06
# reactions: using limited emoji reactions for feedback can be useful
this post has non-textual or interactive elements that were snipped from this backup page. see the full content at @/reactions.html.
this post was inspired by https://www.mcmillen.dev/blog/20210721-downvotes-considered-harmful.html. i like the idea in that post and here i'll just think loudly how to implement it in general.
more and more websites allow users to "emoji react" to the main content. for a nice example see a random popular github issue like https://github.com/golang/go/issues/15292. observe how most comments have some emoji reactions under them.
for a long time i didn't really like this idea. but i spent some time thinking about this and i think this can be pretty useful if done well. in this post i'll explore what features would make this a useful feature for me.
# emojis
but first let me add a disclaimer that i hate it when people communicate in pure emojis. to me it's like talking in ancient hieroglyphs. and if you encounter a hieroglyph you have never seen before then it can be pretty hard to look up what it means.
suppose you walk along a street and there's a red sign saying this:
DISALLOWED: 🥨 🐶 🛼 🪅
you can sort of figure out what the first 3 mean: no eating, no pets, no rollerskates. but what does the last one mean? and when you call your partner to ask what the 4th symbol means, how do you verbalize your question? unless you know that the author meant toys, you have a hard time figuring out the meaning just from the picture.
words wouldn't have this problem:
DISALLOWED: eating, pets, rollerskates, toys.
i admit, i'm heavily biased here: i think verbally. i have to mentally verbalize or write things down in order to "think" and to solve problems. i can't think in pictures.
but there are people for whom pictures are easier. in that case there is an easy compromise: just display both.
DISALLOWED: 🥨(eating), 🐶(pets), 🛼(rollerskates), 🪅(toys)
easy peasy. and on user interfaces where there's little space, let me hover over the icon and the meaning should appear in a tooltip.
and i haven't even talked about the case where an emoji has a completely different (sometimes opposite) meaning in different cultures. communicating with emojis across cultures without a reference to the meaning is very hard. or when corporations update the emoji pictures and retroactively change the meaning of past messages in subtle ways for better or worse.
tooltips were heavily used in early user interfaces such as in microsoft windows and microsoft office. i bet one wouldn't be able to figure out what each icon meant without the descriptions:
[non-text content snipped]
most emojis are just as cryptic for me. click on the picture to toggle the descriptions. in case of the above tool icons a full description was displayed in a tooltip if you hovered over the icon. and in menus you could see both the icon and the description to help build the mental association. once you familiarized yourself with a tool and its icon, you could comfortably use it from the toolbar. the toolbar was scary at first but things remained discoverable.
another nice example is this: https://github.com/erikthedeveloper/code-review-emoji-guide. here every emoji is well explained and with such a guide in mind, i think using those emojis in communication is fine. to be fair, i'm not sure i'd like to see such emojis in my reviews yet. but if it's a must then it should be done with a limited set of icons and a guide to the icons.
the other big issue i have is that they are hard to type: they usually require special tooling to enter. i can't simply type them easily with a qwerty keyboard as i do words. well, some systems might allow me to type :thumbsup: and have a thumbs up emoji appear when presented to the user. if it's well accepted that emojis are always represented with english letters in the raw source, then maybe i can make peace with them. but i'm a bit sad that we are trending to revert the super useful invention of the alphabet back to caveman-like pictographic communication. are letters that hard to use? i'm sure i'm in the minority here and i should just move on (even if it feels like going backwards).
so anyway, my point is that using pictures in communication is acceptable only as long as they are paired with a written explanation that can be easily accessed. in that github example i don't see such explanations for the various emojis. i don't know what it means when someone reacts with "rocket" to a comment. therefore i don't like that implementation. if you create an emoji reaction system, then create a guide describing how and when to use the various emojis.
# upvoting
all right but what's the case for such emoji reactions?
it's very common on mailing lists and on simple bug trackers that there's a long thread where people add a simple "+1" comment on its own. this is meant to signal that the given user also has the same problem and would like to see the issue fixed.
this is useful information. but at the same time it's very spammy and doesn't add much value to the thread itself.
i think it is efficient to have a dedicated +1 button to simply track the affected people without the spam. and then you can use this aggregated counter to determine which bugs are the most important to fix.
some projects explicitly call this out: https://go.dev/wiki/NoPlusOne.
"like" is similar. you can "like" a post or a video and then the website can use this information to compute the most popular posts.
so far so good.
# downvoting
a +1 or like on its own is not enough because it cannot express disapproval. seeing the dislikes for an item is also a very useful signal. in the issue tracker example maybe some people consider a bug a feature and don't want it fixed (cue https://xkcd.com/1172). then it's only fair that people can downvote such bugs.
once you have upvotes and downvotes and they can be trusted, then i can use that information to make decisions. if i'm in the mood for some funny relaxing videos then i can avoid low quality clickbait videos by avoiding downvoted videos. or if i'm a creator myself, i can use this feedback to see if people like or don't like my content.
for an example see github issues. it allows sorting by emoji reactions, see the sort dropdown on the right. example: https://github.com/golang/go/issues?q=is%3Aissue+is%3Aopen+sort%3Areactions-%2B1-desc. check out other emojis as well, such as thumbs down, tada, rocket, etc. unfortunately this emoji set is pretty bad but more on emoji selection later.
unfortunately nowadays there's a trend towards removing or hiding negative feedback. mostly because some people don't use such tools constructively. they use it to harass people, e.g. dislike every post a person makes regardless of content.
then the creator is left wondering why their post has so many negative votes. they have all this negative feedback with no explanation and it makes them feel bad. solution? remove the possibility to react negatively, right?
that's not the right solution. the problem is not that people feel bad but rather that content creators can't know why something was downvoted. this hints at an alternative solution: let the downvoters tell why they are downvoting something. a simple one-click "-1" or "dislike" button is not enough. make it at least two-click!
# demo
i've cobbled together some html to show what i have in mind in broad terms. you need javascript enabled to see the demo below. let's take a twitter-like post that people can emoji react to.
you can upvote and downvote a post. the score of the post is then upvotes - downvotes. it's displayed as the first thing right after the post. by default it's +13 because there are 25 upvotes and 12 downvotes. (the exact scoring algorithm doesn't matter for this discussion, it's just an example.)
next to the score is a thumbs up button. you want to simply upvote a post? go ahead and push that button. upvoting a post only needs one click. (really, go ahead, it's just a demo.)
however to downvote you need to press the 3-dot button. it presents you a more complex form. you can still simply click "dislike". but you will get other very common reasons for disliking: "duplicate content", "inaccurate". clicking those would still count as a downvote but the creator and other users will understand better why people don't like something.
but often the predetermined categories don't express all the nuance why someone doesn't like something. those people can add a more detailed comment into the "comment" text box. a sample of those comments is then shown in the feedback form. then the creator and also other users can have an even better understanding of why others like or don't like something. try entering something in the box after selecting a reaction to see how the form changes. (in my @/ffpoll post i advocate for a similar free form comment box for polls too.)
a similar mechanism can be used for flagging posts for moderators, see the remove row. moderators can prioritize their moderation queue more efficiently based on the signals why something was flagged.
[non-text content snipped]
here i categorized the reactions into 3 broad categories: upvotes, downvotes, removal requests (moderation requests). i assigned 3 reactions to each category. maybe it makes sense to have 4 for each category but not more than that because then the interface can get overwhelming.
i keep the generic dislike reaction. but if people still complain about unexplained dislikes then the form can be further tweaked. replace "dislike" with "other" and require for that option a comment. then the creator can simply ignore the "other" reactions with clear conscience if they don't contain a meaningful comment. or such meaningless comments could be even flagged for removal (see the red flag if you hover or touch a comment).
i propose that even upvoting has multiple reaction options. suppose a disaster happens and someone makes a tweet about the event. some people feel weird to "like" such tweets. so in that case people can react with "hug" (or something similar) and still upvote the tweet to popularize it.
select the emojis for the user to choose from carefully. make sure they represent the most popular orthogonal reactions. the more different they are, the more useful the data will become. i've picked the demo's 9 emojis without much thought. in a real service this would need some research.
the comment that can be attached to the reaction is limited to 120 characters. it's meant to add a short explanation for the reaction. it's not meant for discussion. for discussion the user should be able to reply to the post properly. discussion responses also create a notification for the poster. reactions shouldn't.
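for illustration, here's a rough sketch in go of how such reactions could be modeled; the type and reaction names are my own assumptions based on the demo above, not a real implementation:

package reactions

import "unicode/utf8"

// Category groups the reactions into the 3 broad buckets from the demo.
type Category int

const (
	Upvote Category = iota
	Downvote
	RemovalRequest
)

// Reaction is a single user's reaction to a post.
type Reaction struct {
	Category Category
	Kind     string // e.g. "hug", "dislike", "inaccurate", "obsolete".
	Comment  string // optional short explanation for the reaction.
}

// Valid reports whether the optional comment fits the 120 character limit.
func (r Reaction) Valid() bool { return utf8.RuneCountInString(r.Comment) <= 120 }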
# moderation
the 3 reactions for the removal requests are especially handy for moderators. if multiple users mark a post as obsolete, then the post can be collapsed and greyed out but still accessible in general. it's meant to hide duplicate posts and other irrelevant but otherwise fine posts. moderators can then undo this if such action was inappropriate.
if multiple users mark a post as "inappropriate" then the system can automatically unlist the post without moderation intervention. remove the usernames in unlisted posts just to ensure people cannot go witch hunting right away. then later a moderator can make the decision to completely delete the post if it's truly inappropriate. stackoverflow uses such community based moderation. if 6 users flag a post as spam or rude, it gets locked: https://stackoverflow.com/help/privileges/flag-posts. also note how flagging requires the reporter to select why something is flagged. the idea is very similar to what i describe here.
(sidenote: in general i like stackoverflow's approach to moderation. from https://news.ycombinator.com/item?id=39425274: "the only thing that scales with the community is the community".)
if a user marks a post as sensitive, the post would be made immediately unavailable. this is meant for posts that unintentionally contained sensitive data such as phone numbers or addresses. given the grave effect of this action, this reaction wouldn't be available to everyone but only for trusted users. or users who went through some training material explaining the button. and any misuse would result in temporary bans if needed. such bans should be scary enough if signing up to the service is hard per @/signups.
# anonymity
should the reactions be anonymous or public? in the github instance i've linked above it's public, you can see the list of usernames for each reaction type if you hover over the reaction with the mouse.
i'm not fully sure about this but i think making the votes anonymous is better. it might allow for meaner comments. but at the same time the creator will see more honest feedback.
e.g. you might want to avoid giving a negative reaction to a friend to avoid souring the relationship. but if it's anonymous, you would feel more free to give an honest reaction.
and as for mean comments: users should be able to flag the individual free-form comments for moderation. and then mean users can be tempbanned to cool down a bit.
it's not a hard rule though. in some cases it makes more sense to have the names associated. e.g. in technical discussions where you might want to use such feedback to guide decisions and want accountability. but whichever way you choose, make it clear to the users who can access this data.
# update rate
avoid updating the scores in real time. some people would be obsessively reloading their post to see the feedback streaming in real time. the system should not encourage such obsessions.
update the stats only every hour or two. this also makes the system easier to implement and cache. no need to build super efficient realtime data aggregation systems.
and make sure that if i react, at least 1 hour passes before my reaction appears in the stats. so if i react at 13:58, the 14:00 update won't contain my vote, only the 15:00 one will. this avoids the edge case where someone shares a post, checks the reactions 5 minutes later, and deduces how certain people reacted even in an anonymous feedback system.
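a tiny sketch in go of this delayed hourly bucketing (the function name is made up):

package stats

import "time"

// publishTime returns the hourly stats update in which a reaction made at t
// becomes visible. truncating to the hour and adding 2 hours guarantees at
// least a full hour of delay: a reaction at 13:58 shows up in the 15:00 update.
func publishTime(t time.Time) time.Time {
	return t.Truncate(time.Hour).Add(2 * time.Hour)
}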
# creativity loss
there's another subtle downside to allowing reactions. people start craving the positive feedback. so if a post doesn't generate a lot of positive reactions, the creators will revert to content that does generate a lot of reactions. this is often easier to consume, lower quality content. the creator loses their unique voice. in other words there is a loss of originality and authenticity in the process.
but this effect has to be weighed against how useful seeing such feedback on content is. i'm super annoyed that whenever i look for movie trailers on youtube i get all these "concept" fake trailers. the annoyance comes from the fact that such trailers are often not clearly labeled. e.g. the concept bit is the last word in a very long title. they are clickbait so they get a lot of views. then the channels keep churning them out which then spams the search results.
i'm not against creators creating them but they should be clearly marked as such. if not, then users could tag such videos with the "inaccurate" reaction. and then the search could allow me to filter out "inaccurate" videos. that would be nice.
overall i think the benefits outweigh the drawbacks so i think it's worth having this system.
# reviews
i think such a feedback system could be used for reviews too instead of the 5 or 10 point scale systems that are common today. https://apenwarr.ca/log/20231204 (NPS, the good parts) is a good article explaining all the downsides of such scales.
not giving the full score to a gig worker (such as an uber driver or delivery person) in a review could result in the worker losing their job. at that point the review system loses its value because most people don't want to mess up someone else's life for a small mistake. the reviews are then not fully honest.
instead just boil down the feedback into two "overall positive" and "overall negative" categories. and from those let people choose a sub-reaction that best describes their experience.
in case of videogames (because that's what i'm most familiar with) the sub-reactions could be things like "good gameplay" and "good story" on the positive side, and "short" and "buggy" on the negative side.
the reviewers then would need to decide whether their feeling about a game is overall positive or negative. and then they would need to choose the sub-category that most closely matches their feeling.
when comparing game a vs game b and you see that the first has score 7 and the latter has score 8, does that really give you good information? those scores are super subjective. but when i see that game a's review is "good gameplay" vs game b's is "good story" then i can compare games already. i might opt for the former because gameplay is what i want from games. i'd look for movies or tv shows if i want good stories anyway.
another way to approach this is to allow reviewers to pick multiple reactions, not just one. so a game could be marked as "good gameplay, good story" but also as "short, buggy". in a 5 point rating system that would mean a 3 but in this detailed system i get a much better understanding of what to expect from this small structured piece of information.
such multi-selection could be allowed for the emoji reactions too but i'm a bit wary of it because it might be a bit too complex to use and reason about.
# summary
to summarize my thoughts: emoji reactions (and review systems) are currently a bit fluffy and don't give much useful information for users. but with some tweaks and in exchange for a little bit of complexity these could be turned into super useful data. i hope various systems will slowly pick up such changes in the future.
published on 2024-05-13, last modified on 2024-09-23
# redir: implement shortlinking via a redirect script
this post has non-textual or interactive elements that were snipped from this backup page. see the full content at @/redir.html.
[non-text content snipped]
[non-text content snipped]
[non-text content snipped]
it's nice when in markdown source i can simply write i/15292 and it auto-linkifies to my project's issue tracker. or when i can write cs/file:regexp/regexp.go and it links to a code-search of the given query in my project's repository. or when i can have goto links like go/ref#Operator_precedence.
[non-text content snipped]
what's even better? if i can type those queries into my browser's url bar and i'm navigated to the desired sites right away.
(i am using go as an example project, i am not affiliated with it.)
in this post i describe a hacky way to achieve that in a simple manner with only a minor usability drawback. i'll focus mostly on how to make this work in the url bar.
# dns search list
one very complicated approach to achieve this is to create a redirect service such as redirect.mycompany.com. then add redirect.mycompany.com to the dns search list (see man resolv.conf). then when you type i/123 in the browser, the local dns resolver will try to resolve i.redirect.mycompany.com first.
i will not elaborate on this because this is hard to set up, hard to maintain, insecure because you can't do https, etc. i don't recommend this at all.
# browser extensions
another approach is to use a browser extension for this. https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/API/webNavigation is one api with which this can be implemented.
i haven't really tested this but you would need something like this in the extension:
// this is where the shortlink -> full url logic would live.
function expandurl(url) { ... }

function navigate(ev) {
  if (ev.frameId != 0) return;
  chrome.tabs.update(ev.tabId, {url: expandurl(ev.url)});
}

chrome.webNavigation.onCommitted.addListener(navigate, {url: [{urlMatches: '^(i|cs)$'}]});
you would need only the tabs and webNavigation permissions. and i think this works even when clicking on shortlinks in a webpage. but a less intrusive approach to an extension would be to install this as a search engine. then it wouldn't work for clicking but you can have the rewriting still happening when you enter such a shortlink into the url. see the search_provider setting at https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/manifest.json/chrome_settings_overrides.
there are a couple of commercial extensions that implement this idea one way or another, for example:
these seem to only work for the go/ prefix, not any other such as i/ or cs/. maybe those are configurable too, not sure. but here i'm talking about creating all this ourselves anyway.
creating a browser extension is somewhat involved. and a lot of people are a bit uneasy about installing random extensions. it's very easy to change an extension to mine bitcoin without the userbase ever knowing about it.
but if we can accept a slightly less usable approach, then there is another solution.
# redirect page
first create a static redirect page hosted on your site or github pages (for free). it's an html page with a bit of javascript that redirects based on the part after the hash (#) in the url.
<script>
if (location.hash.length >= 2) {
  let newurl = ... // compute where to redirect based on location.hash.
  window.location.replace(newurl)
}
</script>
for demonstration purposes this page is such a redirect page:
but you can go moar crazy and redirect based on full urls too (prefixing with https:// is also fine):
sidenote: if your static file server ignores the query part of the url then you can put the redirect part after a ?q= and have the javascript redirect based on that instead.
# urlbar keyword
once there's such a page, all that's needed is to hit that page when a shortlink expansion is desired. all solutions will require a keyword. suppose the keyword is `s` as in shortlink. then in the url bar you need to press s, then space, and then you can write the shortlink.
so to go to a github issue, you would need to type "s i/123" into the url bar and press enter.
i'll document how to set this up on a desktop. i'm not sure about mobile phones, i'm not even sure i would care enough to have this over there.
# firefox: bookmark with a keyword
in firefox you can assign keywords to bookmarks. so bookmark this site: click the star button on right side of the url bar. then find the bookmark in the bookmark menu and click edit. append "#%s" to the url and add a keyword like this (the screenshot adds "s" as the keyword):
[non-text content snipped]
and that's it. i don't fully understand why chrome doesn't allow assigning keywords to bookmarks.
there's another quirky way to achieve the same. firefox apparently adds an "add a keyword for this search" option for most simple input elements in a form. right click on this input below and click "add a keyword for this search" to add it. pick a keyword such as "s" to be able to save it:
[non-text content snipped]
# chrome: custom search engine
follow https://superuser.com/a/1828601 to add a custom search engine in settings at chrome://settings/searchEngines. use https://iio.ie/redir#%s as the url.
i think you can make the keyword empty (or mark the search engine as default) and then it becomes the default search engine. then you won't need to type a keyword to trigger the redirection. if you do this then make sure the redirector is a passthrough for most urls.
one advantage of putting the search query after the hash (#) and then do the translation locally is that the search query won't be sent to the server. that's because the server won't see the part after #. i type all sorts of sensitive garbage in the url so this approach reduces the risk of my garbage appearing in various server logs.
# firefox: custom search engine
in firefox the option to add custom search engines is hidden by default. you can enable the add button like this: https://superuser.com/a/1756774. then a similar approach should work as described above for chrome.
alternatively, you can set up https://developer.mozilla.org/en-US/docs/Web/OpenSearch xml for the redirector service. then the user can install the search engine relatively easily.
the site needs a <link rel=search ...> tag and then you can right click in the url bar and add the search engine from there. i have that for this page. right click in the url bar and select "add iioredir" from the context menu. and then you have to manually assign a keyword for it in the firefox settings. search for "search shortcuts" in the settings to find the form for this (about:preferences#search).
this way of adding search engines is not supported in chrome because they feel it leads to clutter for users, see https://stackoverflow.com/a/75765586.
# rules
all right, the user can use the redirect page. but how to implement it? how to represent the redirect rules? how to represent the rule that transforms i/123 to https://github.com/golang/go/issues/123?
requirement: the rules should be easy to parse in both javascript and go. javascript support is needed to make the redirection work in the url bar without specialized server software. the go support is needed to support applying the redirect rules to markdown (assuming the markdown renderer is in go). so when the user writes i/123 in the .md file the generated html page will contain a link to https://github.com/golang/go/issues/123. this avoids an unnecessary hop to the redirect service and makes the link work for users who don't have any redirection set up.
(the downside of skipping the redirection service is that you cannot track how often a rule is used. if you care about that then it might make sense to rely on a redirection service. but i recommend not tracking it, it creates all sorts of wrong incentives.)
to make things easy to implement, i propose representing the rules as a text file with the following syntax:
i'd support two forms of rules: simple prefix replacement and complex substitution. the github issue redirection could be described via two simple prefix replacement rules:
rule i [a-zA-Z] https://github.com/golang/go/issues?q=
rule i .* https://github.com/golang/go/issues/
the first one leads to the search site. so typing i/regexp would search for issues about regexp. but if the user types a number, they would get to the page with that id. testcases can describe this more clearly:
test i https://github.com/golang/go/issues/
test i/123 https://github.com/golang/go/issues/123
test i/is:closed https://github.com/golang/go/issues?q=is:closed
websites can be easily added with the same syntax:
rule twitter.com .* https://nitter.poast.org/
rule x.com .* https://nitter.poast.org/
test twitter.com/carterjwm/status/849813577770778624 https://nitter.poast.org/carterjwm/status/849813577770778624
test x.com/carterjwm/status/849813577770778624 https://nitter.poast.org/carterjwm/status/849813577770778624
complex replacement would be needed whenever you want to extract bits of the shortform and convert them into a more complex url. this would trigger whenever the replacement contains a $ symbol. hypothetical example:
rule aclcheck ([a-z0-9]*)/([a-z0-9]*) myaclcheckservice.example.com/check?group=$1&member=$2
test aclcheck/employees/alice myaclcheckservice.example.com/check?group=employees&member=alice
or here's a youtube -> invidious example:
rule youtube.com ^watch.*v=([a-zA-Z0-9-]*).* https://yewtu.be/watch?v=$1
test youtube.com/watch?v=9bZkp7q19f0 https://yewtu.be/watch?v=9bZkp7q19f0
the exact syntax for the replacement is described at https://pkg.go.dev/regexp#Regexp.Expand. javascript follows similar rules.
ensuring the same regex works both in javascript and go is important. but that's why i propose that the datafile contains tests. they can run for both the go and javascript implementations to make sure they work across platforms.
here's an example implementation in go: @/redirgo.go. and here's an example implementation in javascript: @/redir.js. look for the newruleset() and the replace() functions. the javascript one is the actual implementation that's driving the redirect rules on this page.
the main reason that i have separate keyword and pattern parts in the rule definition is efficiency. the replace logic splits on the first / of the query and treats the first part as the keyword. and that allows quickly filtering the rules. this way the implementation doesn't need to try matching all the regexes, which can be slow if there are a lot of rules.
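here's a stripped down sketch in go of that replace logic; the real implementations linked above handle rule parsing, the tests and more edge cases:

package redir

import (
	"regexp"
	"strings"
)

// rule is one parsed "rule <keyword> <pattern> <replacement>" line.
type rule struct {
	pattern     *regexp.Regexp
	replacement string
}

// ruleset maps a keyword to its rules in file order.
type ruleset map[string][]rule

// replace expands a shortlink query such as "i/123" into a full url.
func replace(rs ruleset, query string) string {
	keyword, rest, _ := strings.Cut(query, "/")
	for _, r := range rs[keyword] {
		if !r.pattern.MatchString(rest) {
			continue
		}
		if strings.Contains(r.replacement, "$") {
			// complex substitution: expand $1, $2, ... from the pattern's groups.
			return r.pattern.ReplaceAllString(rest, r.replacement)
		}
		// simple prefix replacement: prepend the target url to the query's tail.
		return r.replacement + rest
	}
	return query // no rule matched, leave the query unchanged.
}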
# goto links
another common usecase is the "goto links". these are links in the form of "go/someid" and link to some other website. and then users can freely set up new go links. this is the idea behind https://golinks.io and https://trot.to.
(i'd use the e/ prefix for such a usecase because it's shorter and still easy to pronounce. the "e" can mean "entry link". but i'll stick to the go/ nomenclature because that's what's commonly used.)
it should be easy for users to add new go links. if you have a separate service for this then all you need is this rule:
rule go .* https://goto.mywebsite.com/
and then the users would edit such links in that service.
but if you don't then let users simply add the goto rules directly into the rules file:
rule go ^blog([/?#].*)? https://blog.go.dev$1
rule go ^book([/?#].*)? https://www.gopl.io$1
rule go ^ref([/?#].*)? https://go.dev/ref/spec$1
then go/ref#Operator_precedence would link to https://go.dev/ref/spec#Operator_precedence.
currently it looks a bit ugly with the `rule` syntax if i want to be able to append stuff after the url such as in the go/ref example. but you could add a `gorule` directive to better handle the specialness of it. then you could write something like this:
gorule blog https://blog.go.dev
gorule book https://www.gopl.io
gorule ref https://go.dev/ref/spec
perhaps you would also want some acls on these links so an intern wouldn't be able to steal popular links and link them to the rickroll video. but i won't go into that here.
# demo
for reference here's a demo that implements the above rules. you configure the data here:
[non-text content snipped]
and here are the test results (updated after each change):
[non-text content snipped]
# automatic linkification
when we have the rules, we can easily linkify text. suppose that the "replace()" function runs the substitutions. then the following snippet can markdown-linkify all simple instances of such links (warning: this is a bit simplified, doesn't handle all edge cases):
function replaceall(ruleset, text) {
  return text.replaceAll(/[a-z.]*\/\S*\b/g, s => {
    let r = replace(ruleset, s)
    if (!r.startsWith("http")) return s
    return `[${s}](${r})`
  })
}
this transforms a text like this:
[non-text content snipped]
issue i/123456789 will be hard to fix. the problem is this bug: cs/f:tar/common.go+%22could+overflow%22.
[non-text content snipped]
into this form:
[non-text content snipped]
sidenote: currently on this blog i don't do such transformation. i trigger linkification only after the @ sign (though i do linkify http:// tokens too). this lets me write i/123, u/alice, etc type of text without worrying about unexpectedly getting linkified to the wrong thing later in case i ever add shortlink support to my blog. so if i want to have i/123 linkified by my markdown renderer (assuming i have a rule for i) then i would type @i/123. it does add some visual noise to the text but in exchange i have less worries. i might change my opinion on this later though.
# reverse rules
once you have all this then create an extension or bookmarklet that can create shortlinks from long links. so when you are on https://github.com/golang/go/issues/123 and press the extension's button, it will copy i/123 to the clipboard. this way people can easily create shortlinks without needing to remember the exact rules. you can implement this in the same ruleset via having a "revrule" directive.
an extension is nicer because it can add a button next to the url bar and can support hotkeys too. if a bookmarklet is desired then https://stackoverflow.com/q/24126438 could help to keep it short.
[non-text content snipped]
# keep urls simple
ensure links in common tools have a simple, memorable url structure. then people are more likely to linkify things naturally.
linking things together is what makes the web great. it allows us to dig deep into things. wikipedia is great. i don't say everything should be linkified (e.g. every word linking to thesaurus). but do give linkable references where it makes sense. and if you are creating documentation tools then make sure that linking things in it is easy.
[non-text content snipped]
published on 2024-05-20, last modified on 2024-09-02
# tlogging: sample the current activity every 30 minutes
i used to struggle with focusing on work tasks. sometimes i just felt overwhelmed and didn't know which task i should work on. i solved this problem by always picking my oldest task in my todo list whenever i started feeling overwhelmed.
but even then, i often got distracted and made little progress. i then tried techniques like pomodoro. the idea is that i select a task and then work on that for the next 30 minutes while there's a timer on my desktop reminding me to focus. then i take a 5 minute break and repeat. this didn't really work for me either. i just found it to be way too much of a hassle to be this formal about my focus.
# stochastic time tracking
i kept looking for some tips and tricks and eventually i stumbled across the idea of work sampling or stochastic time tracking as described at http://messymatters.com/tagtime. the idea is that a tool regularly interrupts me and asks what i am doing. i describe what i'm doing (or what i was doing the past few minutes) and then i have a logfile with samples of activities. it's like the statistical profiling of linux perf but for humans. or like a reverse pomodoro technique.
i thought maybe better time tracking will help with my focus because i will better understand where my time goes and can adjust accordingly. this type of sampled logging immediately resonated with me because i did something similar when i was doing @/recording. i was experimenting with recording my screen and i wanted an up-to-date statusbar that displays what i am doing in each moment. i kept forgetting to keep it up-to-date so i created a nagging tool that asked me to periodically update the status. and whenever i did this, i had pretty sharp focus for some reason. the only difference here is that now i also record the timestamp whenever i add a status update and append the new status to a logfile.
back then i named the tool tlog as in "time logging". i quite liked the name so i kept it and use it now to refer to the whole method.
i started doing this type of tracking about a year ago. i don't do random sampling because i found that annoying. i just sample myself each ~30 minutes. this works out quite well with meetings. they are usually 30 minutes so if i add a sample just before the meeting, i get the next nag right at the end. and then i can log a quick note about the most important takeaway from the meeting.
# work log
these samples give me a very useful work log at the end of the week. i use that to fill a short weekly summary note which is encouraged at work. the weekly summaries aren't mandatory but i do them nevertheless. i used to feel like i was doing nothing interesting at work and was very unproductive and useless. but doing this weekly review completely eliminates these dark feelings. and if management ever asks me what i was doing recently or where my time is spent, i can always easily answer that question.
whenever i'm interrupted i either log what i'm doing right now or log where the bulk of my time went in the past half hour to ensure the big items are covered. if i finished two tasks, i might add two samples. this biases the timing data but i don't really care about that. the number of samples is already enough for me to see where the bulk of my time is going.
i don't mind the interrupt either. if anything, it helps me focus. it serves me as a reminder to get back to work if i started to drift in thoughts.
if i find that i was mostly slacking, browsing, procrastinating in the past half hour, i just log it as "slacking". i feel a bit bad whenever i do that so this encourages me to achieve some useful stuff by the next sampling. writing a nice beefy update feels good. this then motivates me to work more so there's a positive reinforcement cycle to keep me working rather than procrastinating. if i procrastinate too much then i feel depressed due to feeling useless. this method eliminates a lot of procrastination for me so thanks to this i feel sad less often. i like this method just from a mental health perspective too.
# tool
this is the command line tool i wrote for this: https://github.com/ypsu/cfg/blob/acdf4f5/utils/tlog.go. if i run `tlog -w` it starts watching ~/.tlog. it wakes up every minute and checks the last modification of that file. if it's older than ~30 minutes, it emits an alert to the terminal via writing the "\a" byte. my tmux is set to highlight the window with an alert and my i3 also alerts me if an xterm window has the alert bit set. that's quite non-intrusive and even if i accidentally clear the alert status, one minute later the tool will re-alert so i can't miss it for long.
the second invocation mode is `tlog [message]`. this simply appends the argument to ~/.tlog along with a human formatted timestamp in the local timezone. so i can just run "tlog slacking" or "tlog myproject: i fixed the race condition bug" in any terminal to log my work from the past half hour.
i can also run `tlog` on its own and it starts vim in insert mode for me to write the update. sometimes editing in vim is more convenient especially if my update contains all sorts of weird quotes.
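for reference, the watch mode boils down to something like this minimal sketch (not the actual tool, just an illustration of the idea):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

func main() {
	tlogfile := filepath.Join(os.Getenv("HOME"), ".tlog")
	for ; ; time.Sleep(time.Minute) {
		st, err := os.Stat(tlogfile)
		if err == nil && time.Since(st.ModTime()) > 30*time.Minute {
			fmt.Print("\a") // ring the terminal bell so tmux/i3 can mark the window.
		}
	}
}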
to review the updates i just open ~/.tlog in a text editor and read through the log. then i summarize the more interesting updates into the work summary tool mentioned above manually.
# log format
i like to categorize my time into broad groups. so the first token of my update is usually the project name and then comes the message. so i might write `tlog project1, some achievement` or `tlog pblog, finished the tool section of the tlogging post`. i use a comma to separate the group name instead of a colon just because it's easier to type, it doesn't require pressing shift. i just write `tlog emails` if i was just reading emails.
this helps me to see where my time is spent in a broad manner. it's clear which project takes up most of my time just by eyeballing the beginning of the updates.
i also track meetings in the form of "meeting, foo sync, discussed x, i proposed y and people liked it" or "meeting, team sync, discussed status updates, nothing interesting". having such data for my meetings is nice to have in case i want to see how much time i spend in meetings and how useful they are.
# consumption log
i've seen a high level, tech-lead-like person who summarizes every document he sees for his own reference. basically every time he reads a work related design doc or roadmap, he makes a 2-3 line long note about the most important takeaways into a notes doc of his own. then if somebody references a doc or asks him about a doc he read a while ago, he doesn't need to re-read it again. i think such summarization is really helpful in remembering the things you consumed. and it also works as a work log of where his time was spent.
i also see a lot of bloggers keep similar notes for all the books they read albeit not timestamped. example: https://sive.rs/book.
i already have lot of private notes about various things. i have a notes file where i just keep appending various notes. for every note i give a short title and timestamp. i wrote about an earlier version of my workflow of this in @/task.
often it's not the note itself that's the most helpful but the very act of intentional summarization. the act of reconstructing the new information with my own words in my own mental world can deepen the understanding and help retain the information.
nevertheless the notes are super useful. i often end up looking up some random tidbits in that file. but i don't do it for every document, every website, every youtube video i consume. mostly because it is a lot of effort: i'd need to open the editor, add the title, timestamp, etc.
now with the tlog tool this is much less effort. and i now started doing this for some content i consume. so my .tlog file not only contains status updates but reference information too. the convenience of the tool really helps me be more proactive in keeping notes.
but i don't do it for every document i read or video i watch. i'm not at the level of that guy i mentioned above. too much effort. maybe one day.
i also put short, completely random unstructured notes into it occasionally since it's more convenient than opening my proper notes file.
sidenote: i did consider the zettelkasten method for my notes at some point. but i found that a simple structure in a text file is more than enough for me, no need to overcomplicate my life.
# recommendation
doing this requires non-trivial effort. it requires me to periodically summarize my past half hour into a few words. it can be hard sometimes because i have a bad memory. so i only do this during my work hours for work items.
so do i recommend this to others? nope because of the effort it requires. i think i only do this because i have some sort of obsessive mind. but i don't think most people care about such things or even care where their time goes.
even for me this was not the first experiment that worked (e.g. i tried pomodoro too). if a recommendation is needed then i'd say experiment with various things and stick to whatever enjoyable method works.
published on 2024-05-27
# msgxchg: exchange secret messages instead of gifts in secret santas
imagine a typical central european school for kids. you have a group of 20 kids who visit the same classroom for 9 or so years. they know each other quite well.
because it's a tightly knit group, secret santa is a very common tradition there. the kids draw names and so they get a secret assignment: give a gift to the drawn person. wikipedia: https://en.wikipedia.org/wiki/Secret_Santa.
there are other places where this is done: among friends, among employees, etc. i just mentioned the school as an example of a tight group and because that's where i encountered this.
but ugh, i hated this. i never could figure out an adequate gift. i never really appreciated any gift i received. and it just felt like a waste of money and effort, created unnecessary trash, etc etc. i thought and still think this tradition is silly. my love language is not gifts i suppose.
# alternative idea
i do like the intent behind the game though. so here's my proposal for alternative rules, one that might be more meaningful.
each person draws 2 other people randomly. this is mostly for redundancy reasons and to make the game more challenging. perhaps software can do the random assignments to reduce the chance of a person drawing themselves.
then rather than giving gifts to each other, give 2 messages to each target. the sender has to respond to 2 prompts: describe something the target did recently that the sender liked, and recommend an action or offer an activity for the target.
in the second the recommended action should be something the sender truly believes will do good for the target. the target doesn't have to accept the offer but if they do, the person offering should carry it out.
# motivation for sending
both questions are tricky especially if i don't know the target person very well. if i can't think of a nice recent action then i would need to go and ask the target's friends about things the target did recently and then write about the one i liked the most. this would increase my social knowledge about my group both by talking to others i normally don't talk to and by learning something about the target. and socializing outside of my comfort zone is healthy anyway.
the second question has similar benefits too. but it also makes me think harder: i have to figure out some activity i would be happy to do with the other person. so here i need to come up with something i can do even if i don't like the other person. it teaches me to find common ground.
here are some example offers:
# motivation for receiving
i think it would be nice to receive these too. it's nice to hear someone noticing and then calling out something i did. it makes my heart warm.
the offer thing might be hit and miss. but maybe deep down i do think i need help with math but i never really asked for help. here i can give it a shot for free. or maybe i do think i should socialize more, i'm too much of a loner. an offer for a chat in such a case can be worth much more than a cheap chocolate.
these gifts have the chance to be truly meaningful for the receivers. with two received offers the receiver has a higher chance of getting something useful.
# process
when the game begins, the participants get the assignments from the admin (e.g. the teacher in the school example). they have 2 weeks to come up with the messages.
then the admin has 1 week to review the messages. there's still a chance that some people might send mean messages or the messages don't fit the prompts. when the admin sees invalid messages, they work with the senders to improve them.
then the messages are revealed to the recipients. this is done privately, other participants cannot see what others received or from whom. and participants should not reveal these even voluntarily, to avoid embarrassing the senders and to avoid peer pressure on people who truly don't want to reveal their received messages. such a no-reveal commitment might increase the sensitivity and thus the personal value of the messages. it creates unique, secret bonds between random pairs of people. it's called /secret message/ exchange for a reason.
the admin part can be delegated to a computer program if the group is tight and the chance for a misunderstanding is low.
(the admin cannot take part in the game itself. an alternative solution to administration is peer review: each participant nominates a trusted message reviewer from their peers. the reviewer will have a chance to review the incoming messages and work with the senders to improve them if needed without leaking the contents and the identities before the reveal time. hopefully this would prevent receiving messages that are a truly bad fit. the reviewer would significantly bias the messages so i'm not sure this is a good idea but it's worth considering.)
and that's it. this can be played with online friends too, no need for physical presence or even sharing real names or addresses. not that i'll ever play this game since i'm no longer in school or part of a tight group. but something i wish i could have tried instead of the useless gifts. maybe next life.
(also if this is something you would want to try with a group and you need an app then let me know, i can whip together a simple page for this on this site. mention the variant you need. in exchange i'd like to know if people enjoyed this or not after the event.)
published on 2024-06-03
# rssfeed: add rss feeds to your blogs
an rss feed is a page that a simple blog such as this one can provide to report the latest posts on the blog in a structured format. then rss feed reader software can periodically fetch these and show the users the latest posts across many blogs. users can follow others without algorithmification and ads. pretty cool.
for an example of a feed go to my @/rss page and check out the source via the browser's source explorer.
for a feed reader i recommend feedbro because it's a free, locally running browser extension and doesn't need registration to online services. there are many others, possibly better ones, so do look around. (i haven't, explained in the next section.)
rss used to be a big thing while google reader existed. it became a bit obscure after that shut down but still many sites provide it and there are many feed readers.
in this post i want to compare rss to the alternatives and then give some implementation tips.
# addictive
disclaimer: i don't use any feed readers, i don't follow anyone with rss. in general i find most of these things too addictive. when google reader was a thing, i spent way too much time in it.
it's worse than something like a facebook or tiktok feed where you just get garbage content that you can easily quit after you spend 3 seconds thinking about it. your own feeds are personally curated, probably high quality and interesting so it makes it harder to quit.
but i did it nevertheless. now i rely on my memory to follow blogs and video channels. whenever i am in the mood for some fast-food equivalent of browsing, i just go through the blogs i remember, type their urls (i use no autocomplete) and check for new content manually like a caveman. if i forgot the exact url then i just websearch for it. if i forgot about a blog completely then good riddance, i just saved a couple of minutes for myself. then later if i rediscover the forgotten blog then it's always a nice present to read many new posts.
but nevertheless, rss is cool. i don't really like some of the quirks of it but at least we have a standard.
note that there's also atom, which is very similar. i don't fully understand the differences, but consider everything i say here to apply to that too. atom seems to be using iso 8601 timestamps so it must be better. but iirc rss is the more popular term, that's why i talk about rss. i don't go into the technical details too much anyway.
# alternative: social network
one alternative to rss is to re-post the content on social networks and then the on-platform followers will get a notification about it if they are lucky. lot of people do this. if growing and reaching a large audience is the goal this is probably unavoidable.
it's a bit unreliable because as far as i am aware these megacorps take following and subscriptions more as a hint than an actual request. so a new post might be completely hidden for the followers.
and it isn't suitable for all followers: not everyone is comfortable letting megacorps know what they are following. not everyone has accounts on social media sites.
# alternative: web's push notifications
apparently javascript allows webpages to register for push notifications. then the website can remotely wake up a service worker in the user's browser to show a notification. this works even when the user doesn't have the page open! to reduce the creepiness of this, the service worker must show a popup so then the user notices that the website's code ran in the background and can disable that. (the service worker can skip showing a notification but if it does it too often, it will lose the notification permission and thus the permission to run without the page being open.)
this is pretty anonymous so it might be good for following sites. but it requires installing service workers so if the user changes browsers or clears site data then they might lose the subscription.
followers would get a notification on their phone or desktop immediately when the new content appears. i think this is very annoying so i'd guess not many people would sign up for this anyway.
to be fair, this is quite interesting technology, i might make a separate post about this later.
# alternative: email
the content creator can create a newsletter to which the users could subscribe via providing their email address. then the creator just sends out an email whenever new content is published.
this can be convenient for the user because they can use their advanced email filters to categorize their subscriptions. if it's allowed then followers could even reply to the content and increase their engagement with the creator.
and it's also nice for the creator: they can see the number of their followers. and these are most likely real, interested followers since subscribing to a newsletter is a bit harder than subscribing to a channel on social media.
there are several problems though:
so i suppose email is somewhat okay but it's messy and still might not reach all people.
# alternative: rss
rss on the other hand is very simple to set up and can be served from a static website. just fill a couple fields in the xml such as publish date and title and it's done.
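the implementation section below deliberately doesn't go into feed generation, but just to show how few fields are needed, here's a small sketch that prints a minimal feed with go's encoding/xml (the urls and titles are made up; this isn't the generator behind this blog):

package main

import (
	"encoding/xml"
	"fmt"
	"time"
)

type item struct {
	Title   string `xml:"title"`
	Link    string `xml:"link"`
	PubDate string `xml:"pubDate"`
}

type rss struct {
	XMLName xml.Name `xml:"rss"`
	Version string   `xml:"version,attr"`
	Title   string   `xml:"channel>title"`
	Link    string   `xml:"channel>link"`
	Items   []item   `xml:"channel>item"`
}

func main() {
	feed := rss{
		Version: "2.0",
		Title:   "example blog",
		Link:    "https://blog.example.com",
		Items: []item{{
			Title:   "hello world",
			Link:    "https://blog.example.com/hello",
			PubDate: time.Date(2024, 6, 10, 0, 0, 0, 0, time.UTC).Format(time.RFC1123Z),
		}},
	}
	out, _ := xml.MarshalIndent(feed, "", "  ")
	fmt.Println(xml.Header + string(out))
}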
most rss feeds (mine included) also put the content into the feed but it's somewhat tricky. and each feed reader displays the feed slightly differently. if this is a concern, the feed could just contain a link to the main page. its main purpose is just to notify users of new content anyway. that's what i did for a while until i beefed up my feed generator.
some people put all content into the feed resulting in huge feeds. i recommend against this, just keep a couple months worth of content in the feed to keep it short (assuming it's updated regularly). otherwise the frequent fetches by the various readers can cause undue load. a small feed should be fine because i think most people only care for the freshest content anyway. for the "i-want-to-read-everything" usecase i recommend creating separate archive pages. my blog has that too, it's the github backup link at the top of the @/frontpage.
see http://rachelbythebay.com/w/2022/03/07/get for some other tips to reduce the load. (not sure why that post doesn't consider reducing the size of the feed though.)
the downside of rss is that it requires specialized tools so it won't reach many people either. but it's the cleanest subscription mechanism for the followers because it doesn't leak much towards the site. of course an evil rss feed could do some shady tricks like providing personalized rss feeds or pixel tracking but the other alternatives can be worse.
# implementation
i don't go into generating the feed itself, there are other, better pages for that. just a couple of notes on what to do once the feed is ready.
add something like this to each page's html header:
<link rel=alternate type=application/rss+xml title=domainname-or-other-title href=link-to-rss>
in my case i have this:
<link rel=alternate type=application/rss+xml title=iio.ie href=rss>
this will allow rss feed extensions to automatically recognize rss feeds in the page and the user can add them via one click. usually this is why they have the "access your data for all websites" type of permissions. (not sure if that can be disabled in the extensions if that's a privacy concern.)
for the love of god, set the content disposition as inline for the rss feed. it's so aggravating when i click on someone's rss link and i get the browser's intrusive download prompt. what am i supposed to do with that? with inline disposition the browser will display the raw xml. but at least i can easily copy paste the rss link from the url bar. serve it from an "rss.txt" if your static file server determines the disposition based on the filename.
for bonus points add styling to that raw xml via https://en.wikipedia.org/wiki/XSLT. xslt is pretty cool. the server serves raw xml data and then xslt transforms that into a nice user interface without any javascripting. i do this on @/rss but my xslt skills are very basic so i just do the basic transformation of showing the title (@/rss.xsl).
# recommendation
if you have a blog, add an rss feed to it, because it's relatively simple static content that only needs updating whenever new content is added. give people a choice in how they follow you.
btw, if you want to follow something (e.g. such as a youtube channel) as an rss feed and the main site doesn't seem to be providing them then look for rss feed generators. e.g. for youtube the invidious frontend (such as https://yewtu.be) does provide the feed in a convenient place: it's the link in the top right corner on a channel's video page. (yt provides it too, but it's somewhat hidden, see comments.) web search can find generators for other popular sites. there are even generic newsletter to rss converters such as https://kill-the-newsletter.com/. or there's https://newsblur.com/ which provides an rss like interface to popular sites.
rss is nice way to follow content you care about rather than what megacorps think you should see. (but keep in mind that it can be addictive.)
published on 2024-06-10, last modified on 2024-06-17
comment #rssfeed.1 on 2024-06-16
YouTube provides feeds, e.g. https://www.youtube.com/feeds/videos.xml?channel_id=UCK8sQmJBp8GCxrOtXWBpyEA
comment #rssfeed.1 response from iio.ie
ah, thanks, adjusted the text accordingly.
# tame: tame the inner animal
there is this unexplainable thirst for sexual intimacy in me. i cannot explain or describe it. i think a lot of people have it otherwise the porn industry wouldn't exist. it's fascinating to me, i call it my inner animal since it's driven purely by instinct rather than reason.
i struggled with it for a long time. it made me depressed because i didn't know how to quench these desires. i made a lot of weird posts stemming from this when this blog was relatively young. but nowadays it's no longer a problem. i find some of those posts pretty dumb in retrospect but decided to keep them up. i think it's ok to be dumb on the internet, i find such blogs more charming.
anyway, i decided to jot down what helped me the most to tame my inner animal. such a post might end up being useful for me later in case it gets too wild again and i need a reminder. i'm not saying these things are generic and work for everyone. no, these are the things that worked for me and i'm just describing my experience.
# exposure
about five years ago i moved to switzerland. to my luck this country is more lax about sexuality.
during my teenage years i struggled with nudity. i avoided public showers, i just didn't shower instead, that sort of stuff. but here in switzerland a lot of saunas are naked. so we went to saunas with friends several times, all bare, everybody else bare, no shame. exposing myself and seeing others does help get over this irrational body shame stuff.
but more importantly, here in switzerland there are various courses about sexuality. tantra massage, orgasmic meditation, that sort of stuff. they are multi-day group classes: they explain the basics such as communication, consent, respect, arousal, anatomy, massage oils, various types of touches, etc. then the attendees are randomly paired, they get naked, and practice tantric massage on each other. and then the teacher goes around and gives you feedback including stuff like whether you hold your partner's lingam (penis) or touch the partner's yoni (vulva) correctly. all very casually like in a cooking class.
i can highly recommend attending these courses if someone has such interests. they aren't restricted to switzerland only, they can be found in other countries as well. it might take a while to find the right web search keywords, it's not something widely advertised. sometimes they can be found by finding a few nearby independent tantra massage providers and looking for a "studies" or "certification" section on their website.
i attended a few of these and quickly normalized human bodies, touching others, etc. it helped reverse most of the bad thinking habits i had. this shit should be taught in high school. i mean as an optional practical class where interested pupils can learn and experience sexuality in a very intimate way with the help of professional models. if these things are given to teenagers in a practical but controlled manner then maybe they will make fewer mistakes and/or end up less psychologically damaged. but the world at large is probably not ready for this shift yet.
# therapy
so what if i have a deep desire to have a sexual experience with a different person than my life partner? do i spend thousands of dollars on therapy to try to suppress those emotions or medicate them away? well here's a cheaper way to address that: just have sex with another person if that's what you want. therapy doesn't get simpler than that.
the first step in this process is to open up to your partner. if you have any thought that is bothering you, you should tell that to your partner. that's what partners are for: to help each other. and chances are, assuming you have a reasonable partner that you trust, you can find some middle ground, some compromises, some experiments, etc. it might have some emotional costs but they might just be worth it if the partner doesn't want to live with a depressed, sad, lifeless person.
but this requires a rational partner. if sex is too much of an irrational taboo or monogamy has irrationally high value for them, then it can be tough. but even then, the communication channel must be established and handled very carefully. it's hard but doable. i even experimented with stuff like @/optioning to help me bring up some topics in a slower manner.
and for communication in sexual desires i highly recommend the @/touch game. it helped me to become more assertive about what i want rather than simply hoping for the best. after a dozen sessions i sort of learned what particular activities make my inner animal the happiest. then i just ask for them from my partner and the obsessive thoughts stay at bay for much longer.
but yeah, i've had sex with professional escorts a few times, and it did calm down the desires for variety. it was a bit scary the first time but it gets easier as one gets more experienced. mature, independent escorts can give a really streamlined experience.
i didn't start with escorts right away though. i started with a few erotic massages and worked up my courage from there. this slow approach was also easier on the partner.
the point is that i don't feel depressed anymore, so i have no regrets. i'm very fortunate to live in a country that doesn't make a big deal from basic human desires and let people buy their happiness if they need it.
though note that this doesn't mean that the animal is fully gone from me. when it sees all barely clad ladies walking around in the hot summers, well, it still goes crazy. but at least i can now manage these uncontrollable emotions without going too crazy myself.
initially i thought i needed a secondary relationship and to live in some sort of polyamorous setup. but my partner pushed back against that and it's not like i could find another crazy person willing to enter a relationship with my lazy ass. then i explored the escorting aspect and a few occasions of it turned out to be enough for me. it's much less hassle than maintaining a relationship. the first idea is not always the best. relationships are too much effort anyway, one is more than enough for me. it might well be the case for others that a secondary relationship would work better than transactional sex. others might just need counseling or a psychologist. everyone is different.
# busyness
the other thing that helped tame the animal is that i learned to be busy. a few years ago i couldn't finish the simplest of projects. nowadays i can finish anything i put my mind to. i always have a little hobby project i work on every day. @/mornings describes my daily habit (albeit it's a bit dated, i have streamlined my morning routine since).
the benefit of this is that it keeps my mind busy. it busyloops about my project rather than exploring depressive thoughts and then spiraling into depression. i don't have time for depression. even if i feel like wanting to feel sad, it must come after i make progress on my current project. but the knowledge that i made progress makes me happy and then i don't feel sad anymore.
# age
and the other thing that is changing is that i'm getting older and so is my inner animal. these desires are much less intense compared to what i felt in my twenties. maybe a few more years and they will completely evaporate and i will have one less problem.
# communicate
that was my journey. if there's one generic advice i could distill then that would be this: communicate.
that's how i started. i started writing dumb blog posts as a means to explore my thoughts. the writing itself didn't solve my issues but it helped me to start talking, exploring and trying things and eventually i found what i need.
don't be shy, talk with your partner or @/stream your thoughts anonymously onto the internet. it's fun!
published on 2024-07-08
# slackday: move all internet slacking to a dedicated day
whenever i felt pressure or uncertainty at work, i often turned to aimless browsing (such as hackernews) or watching youtube videos. they give me a nice relief like smoking does for the smokers. but just like with smoking, i quickly get addicted to the distraction. i then constantly need the distraction to put my mind back into the comfort zone. the need for distraction then seeps into the afterwork hours too: then i watch youtube all night and then feel overwhelmed and barely make any progress on anything.
fortunately i have a regular @/reflecting habit that catches me spiraling into madness. this habit forces me to periodically reevaluate my life and come up with new @/habits if i feel i need adjustments. this time i came up with the idea of the weekly slackday.
the idea is simple: i commit to read discussion boards, watch youtube, check blogs, look up trivia on web, etc. strictly on friday. it's like the cheat day in diets but for the internet. if i'm itching on a different day then tough luck for me. my allowlisted itch-scratching options are: writing the itch down, freewriting, leetcoding, exercising, walking, showering, daydreaming, etc. i particularly like leetcoding. tackling an easy problem is the simplest but satisfying distraction i can do that i don't feel too guilty about.
if i feel i want to look up something then i add its link or search query to my slackday todo entry. then on slackday i go through the queued stuff. simply writing the itch down helps calm it down.
# the effect
i feel more productive. i don't have access to distractions that are never-ending time sinks. so in the end i circle back to my todo lists. if i'm not sure what to do, i just look at my oldest assigned worktask and see if i can make some progress with it. most of the time i can do that. and then i get surprised how much stuff i can get done if i don't distract myself.
my interests also seem to be changing. because my time is limited i focus more on content that is more relevant for me. i spend more time on reading golangweekly.com articles and watch less kurzgesagt videos. the latter is too generic and i never learn much from it. but it's easy to consume so i never found a way to stop watching it. now it's easy: i simply don't have time for it on my limited friday anymore.
oh and i eat less junk food like potato crisps too. i used to eat it when watching youtube. but now there's less time for youtube, so less time for junk food too.
in @/tlogging i mentioned i don't do consumption logging. but that's because consumption happened haphazardly, in the evenings or in small breaks, where such logging is inconvenient to do. but now that i'm spending time on consumption in a structured manner, doing consumption logging is easy. i started taking short notes about various interesting blog posts that i can later refer to when i try to make a post about those topics in the future.
i've been doing this for only a little over two months now so the effect could be chalked up to https://en.wikipedia.org/wiki/Hawthorne_effect too, where the increase in productivity is not due to the specific change but due to being more mindful about productivity after any change. nevertheless i feel pretty confident that this has a net positive effect on me. in any case i'm also writing this post to remind myself to go back to this in case i start slipping in the future.
# 100% rule
100% rule is the explanation why such absolute commitment works:
100% commitment is easier than 98% commitment.
the short story is that i don't have to waste time thinking whether i can watch a youtube video this evening or not. it's simply not allowed, end of story. here are some links explaining the idea in more detail:
i highly recommend picking up this mental trick to deal with addictions. or a variant of it: in my case i don't fully reject the internet intake, i just limit it in a very predictable manner and that works for me. the point is to pre-make the decisions so i don't have to agonize about the same problem over and over throughout the day.
published on 2024-08-05
# featver: a semver compatible calendar based versioning scheme
i don't really like https://semver.org. it discourages code cleanups. even small changes such as removing an old unused function require a major version bump. i prefer the more natural piecewise evolution, time based guarantees and calendar based versioning schemes.
unfortunately the go tooling really insists on semver. the semver schema is enforced but its contract part isn't. so i came up with an alternative schema+guidance that looks like semver, is calendar based, and gives time based guarantees: https://ypsu.github.io/featver.
maybe i'm overthinking this. i'll experiment with it in my upcoming hobby projects and see.
published on 2024-09-02
# difftesting: review effect diffs instead of unittesting
four years ago i wrote @/goldentesting. my opinion didn't really change since. in fact lately i'm coding more so i had a stronger desire for the tool i outlined there. i thought a lot about how i could make it convenient in go+git and came up with these modules:
effdump is where i spent most of my efforts and i think is the most interesting. it's a bit hard to explain what this is succinctly so here are two guided examples:
to make the package name unique and memorable i went with the term "code effects" instead of "output" or "golden output". so the library names are "efftesting" and "effdump". if i'm ever frustrated with tests, all i need to think of is "eff' testing!" and then i can remember my library.
# example usecase: my blog's markdown renderer
here's an example usecase i have for effdump. in this blog i have the source text of these posts in markdown and i have a hacky markdown renderer that converts these posts into html. the rendering happens in the server whenever a post is fetched (the result is cached though).
sometimes i change the markdown renderer, e.g. i add new features. whenever i do that, i want to ensure that i don't break the previous posts. so i'd like to see the rendered output of all my posts before and after my change.
effdump makes such comparisons easy. i just need to write a function that generates a postname->html map and effdump takes care of deduplicated diffing across commits. now i can be more confident about my changes. it makes programming less stressful and more of a joy again.
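to make the shape of that function concrete, here's a minimal sketch of the map-building part; the render parameter is a stand-in for my markdown renderer and the effdump api itself isn't shown here:

// postEffects builds the postname -> html map that effdump then diffs across commits.
// render is whatever function converts one post's markdown source to html.
func postEffects(posts map[string]string, render func(string) string) map[string]string {
  effects := make(map[string]string, len(posts))
  for name, src := range posts {
    effects[name] = render(src)
  }
  return effects
}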
# example usecase: pkgtrim
here's another example where i used this: https://github.com/ypsu/pkgtrim. it's a tool to aid removing unnecessary packages from a linux distribution. in archlinux the list of installed packages is scattered across many files.
in order to test pkgtrim's behavior, i keep complete filesystems in textar files. i just grabbed my archlinux installation database, put it into a textar file, and made pkgtrim use it as a mocked filesystem. so my diff tests don't change even if i alter the real installation on my system. and i could add mock filesystems from my other machines too and see what pkgtrim does with them.
whenever i made a change i could immediately tell what the effect was across many inputs. i could immediately tell if the diff was expected or not. if i liked it, i just accepted the diff. if i didn't like it, i continued hacking. but otherwise i didn't need to toil with manually updating the unit test expectations. developing pkgtrim was a breeze.
# caveats
but i'd like to add some caveats about output tests in general. they have a bad rap because they are hard to get right.
it's very easy to create an output that has spurious diffs after the slightest changes. e.g. outputting a go map will have random order. care must be taken that only the truly relevant bits are present in the outputs and any indeterminism is removed from the output. e.g. map keys must be sorted.
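as a small illustration, here's one way to flatten a go map into deterministic, diffable text (plain stdlib: fmt, sort, strings); sorting the keys is the important bit:

// flatten renders a map as "key: value" lines with the keys sorted
// so the output stays stable across runs.
func flatten(m map[string]string) string {
  keys := make([]string, 0, len(m))
  for k := range m {
    keys = append(keys, k)
  }
  sort.Strings(keys)
  b := &strings.Builder{}
  for _, k := range keys {
    fmt.Fprintf(b, "%s: %s\n", k, m[k])
  }
  return b.String()
}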
the outputs are easy to regenerate. this also means they are easy to skip reviewing and fully understanding them. it's up to the change author to remember to review them. because of this, it's less useful for larger teams who might find such tests too cryptic. on the other hand in single person projects the single author might find them extremely useful since they probably know every nook and cranny in their code.
another effect of "easy to accept even wrong diffs" is that it might be less suitable for correctness tests. it's more suitable where the code's effects are rather arbitrary decisions. e.g. markdown renderer, template engines, formatters, parsers, compilers, etc. you could just have a large database of sample inputs and then generate sample outputs and have these input/output pairs available during code review. then the reviewer could sample these diffs and see if the change's effect looks as expected. this could be a supplement to the correctness tests.
but also note that these days a lot of people write and update the unittests with artificial intelligence. people can make a code change and just ask the ai to plz update my tests. so the difference between the two testing approaches is getting less relevant anyway.
so output tests are brittle and easy to ignore. but they are not categorically wrong just because of that. there are cases where they are a very good fit and make testing a breeze. one needs a lot of experience with them to ensure these tests remain useful. unfortunately the necessary experience comes only after writing a lot of brittle and ignored tests. chances are that you will anger your colleagues if you do this type of testing.
caveats and disclaimers given, proceed with this approach at your own risk.
# diffing
diffing text is one of our fundamental tools in software engineering. distilling the effects of the application into human readable text and then diffing those texts can help a lot in understanding changes. it's the human way to make sense of the immense complexity of the world. there's a nice post about this here: https://exple.tive.org/blarg/2024/06/14/fifty-years-of-diff-and-merge/.
so go forth and distill effects into diffable texts and then learn through these diffs!
published on 2024-09-09
# pkgtrim: a linux package trimmer tool
this post is about my little https://ypsu.github.io/pkgtrim project.
i tend to install too much crap on my machine and never uninstall them. this always bothered my minimalistic senses but i wasn't sure how to deal with this situation.
a friend showed me nixos and how you can have a config file and then drive the system installation from that config. i didn't really like nixos, it felt a bit too complex for my simple needs. but i really liked the config driven part.
the other thing he showed me was https://github.com/utdemir/nix-tree. this is a package explorer for nixos. it can also tell you the list and size of the "unique dependencies" for a package. these are the packages that have no other reverse dependencies other than the given package. i really liked that because those are all the packages i could get rid of after uninstalling the given package.
my system is archlinux and after that meeting i was wondering how to have an intent driven installation and allow me to explore package relationships in a simple manner. i think i've managed to figure it out. this is what i came up with: https://ypsu.github.io/pkgtrim/.
the ~/.pkgtrim on my small rpi4 contains all packages i need along with a comment why i need them. while setting it up i've managed to delete some garbage from my system. now i could easily reinstall the whole machine, run `pkgtrim -install`, and end up with the same packages installed as i have now. and i can keep the .pkgtrim file in my dotfiles repo. i think i will sleep better now.
oh and i used my new @/difftesting approach to develop this. writing this tool was a breeze!
published on 2024-09-16
# starglob: simplified glob for simple needs
lately i had multiple cases where i wanted to give the user the ability to select multiple items with a glob-like wildcard matcher:
furthermore there could be multiple matchers and an entry should be considered matching if it matches any of the matchers.
one way of describing a matcher is using regexes. so i'd use "linux-.*" and "outputs/.*" in the above examples. but i don't like this because regexes are verbose (i need the . before the *), are ambiguous about whether they need to match partially or fully, and are unnecessarily powerful for the above usecases.
interestingly i have a similar problem with globs. ordinary globs are non-trivial too: https://pkg.go.dev/path#Match. i don't need most of these features either.
so i ended up using a very small subset of globs: just the * is special and it can match arbitrary number of characters. anything else is matched verbatim, including ? and [. these globs must fully match. example: "linux-*" would match "linux-v1.2.3/alpha" but not "somelinux-v123".
i'm not sure if this subset has a name but i went with the name "starglob" for simplicity. that's what i need 90% of the cases so might as well make my user interfaces use starglob by default.
another big advantage of this is that this is easy to implement, even to match with multiple matchers:
// MakeRE makes a single regex from a set of starglobs.
func MakeRE(globs ...string) *regexp.Regexp {
  expr := &strings.Builder{}
  expr.WriteString("^(")
  for i, glob := range globs {
    if i != 0 {
      expr.WriteByte('|')
    }
    parts := strings.Split(glob, "*")
    for i, part := range parts {
      parts[i] = regexp.QuoteMeta(part)
    }
    expr.WriteString(strings.Join(parts, ".*"))
  }
  expr.WriteString(")$")
  return regexp.MustCompile(expr.String())
}
it just makes a single regexp that matches if any of the starglobs match. empty set of globs match only the empty string. to make the empty set match anything, i can add this to the beginning:
if len(globs) == 0 {
  return regexp.MustCompile("")
}
and that's it.
sidenote: in this implementation * matches path separators too like /. no need for a separate ** syntax for that. most of the time such restriction is not needed so this is fine. it would be easy to add if needed though: first split on "**". then split the individual components on "*" and join those with "[^/]*". then join the "**" split with ".*". but again, this is rarely needed.
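for reference, here's a rough sketch of that ** variant for a single glob; it builds the regex fragment that MakeRE above would otherwise build with a plain ".*" join:

// quoteGlob converts one glob into a regex fragment where
// * stops at path separators and ** crosses them.
func quoteGlob(glob string) string {
  groups := strings.Split(glob, "**")
  for gi, group := range groups {
    parts := strings.Split(group, "*")
    for pi, part := range parts {
      parts[pi] = regexp.QuoteMeta(part)
    }
    groups[gi] = strings.Join(parts, "[^/]*")
  }
  return strings.Join(groups, ".*")
}

with this, "src/**/*.go" becomes "src/.*/[^/]*\.go". but as said, plain starglob is enough most of the time.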
demo:
func main() {
  glob := flag.String("glob", "", "List of comma separated starglobs to match.")
  flag.Parse()
  matcher := MakeRE(strings.Split(*glob, ",")...)
  allfiles, _ := filepath.Glob("*")
  for _, f := range allfiles {
    if matcher.MatchString(f) {
      fmt.Println(f)
    }
  }
}
prints all matching files from the local directory. e.g. to print all source files:
go run starglobs.go -glob=*.go,*.c,*.cc
easy peasy.
published on 2024-09-23
comment #starglob.1 on 2024-09-23
Implementing this via RE seems extraordinarily wasteful given the construction cost. Have you looked into this at all?
comment #starglob.1 response from iio.ie
agree that it is inefficient to construct. but i'd expect it to be rare for a user to pass my application such a long list of complex globs that this starts to matter. matching should be ok in terms of performance.
i haven't looked into optimizing this much. if i wanted a faster or more featureful globbing (e.g. one that supports both alternatives and **) i'd probably go with a package. e.g. https://pkg.go.dev/github.com/gobwas/glob and https://pkg.go.dev/github.com/bmatcuk/doublestar both look nice.
this post is just a short snippet that is easy to copy paste into my future projects when the simple needs don't warrant adding a complex dependency.
# goref: express the non-nil pointer annotation in go with a generic alias
i don't write a lot of typescript but i occasionally dabble in it. i wrote this typescript code recently:
// maybeVariableX and maybeVariableY type is string | null.
// variableZ type is string.
// then i had this code:
if (maybeVariableX != null) {
  variableZ = maybeVariableY
}
i got this error:
Type 'string | null' is not assignable to type 'string'.
i was... pleasantly surprised that this was caught. amazing! i wanted to have maybeVariableY in the condition, i just had a typo.
this thing is called "union types" in typescript. i don't really want that in go. but is it possible to have similar nil-check in go?
i found a nice suggestion to use & for non-nil pointers here: https://getstream.io/blog/fixing-the-billion-dollar-mistake-in-go-by-borrowing-from-rust/. but that requires a language change, that's too big of a change.
based on https://go.dev/blog/alias-names now i could have a simple package like this to represent pointers that should not be nil:
package ref

type Ref[T any] = *T
it doesn't do anything in the go compiler, it doesn't create a new type. i can assign Ref[T] to *T and vice versa just fine. now i could write a code like this:
func f(s ref.Ref[string]) {
  fmt.Println(*s)
}

func g(s *string) {
  f(s)
}
this compiles just fine. it has a semantic problem though: g takes a potentially nil pointer and calls f which wants a non-nil pointer. but a sufficiently smart linter could give a warning here similarly to typescript above! and it wouldn't give the same warning for this code:
func g(s *string) {
  if s != nil {
    f(s)
  }
}
is this useful? would i use it? i don't know. but i wouldn't mind playing with it.
i'm not up for writing such a linter though. just wanted to express the desire that i'd like to try such a linter.
note 1: technically you could have achieved this previously without generic aliases too by writing `type RefT = *T` explicitly for each concrete type T. the downside of that is that you need to do it for every type. or you could use some special `/*LINT:nonil*/` comment next to the var where you want non-nils. the downside of that is that it doesn't get included in the generated godoc so users might miss it. both of these lack the right ergonomics. i think the `type Ref[T any] = *T` might be just simple enough that it can catch on.
note 2: i can imagine using such aliases for other linter-only annotations too such as const, e.g. `type Const[T any] = T`. not that i want const annotations. i fear go is getting too complex.
published on 2024-09-30
# goerrors: annotate errors to save debugging time
this post has non-textual or interactive elements that were snipped from this backup page. see the full content at @/goerrors.html.
instead of:
if err := barpkg.Frobnicate(bazpkg.Twiddle(key)); err != nil {
  return err
}
always write:
package foopkg

...

if err := barpkg.Frobnicate(bazpkg.Twiddle(key)); err != nil {
  return fmt.Errorf("foopkg.Frobnicate key=%q: %v", key, err)
}
in other words: always %v (not %w) wrap errors with a detailed but succinct unique error identifier before propagating the error up. doing so gets you the best errors. that's it. thanks for coming to my ted talk.
[non-text content snipped]
[non-text content snipped]
nuance? yes, there's nuance to this.
for a long while i wasn't sure how to think about errors and always wondered what's the best way to go about them. after a good dose of stockholm syndrome i now love go's approach the best. there are a few concepts i had to understand before the big picture "clicked" for me.
# exceptions
go's errors are just ordinary return values. such a system is often compared to exceptions. let's compare it to java. java has 2 types of exceptions: checked exceptions and unchecked (runtime) exceptions.
which one to use? from https://docs.oracle.com/javase/tutorial/essential/exceptions/runtime.html:
If a client can reasonably be expected to recover from an exception, make it a checked exception. If a client cannot do anything to recover from the exception, make it an unchecked exception.
example for an unchecked exception: the code passes a null pointer to a function that accepts only non-null pointers. there's nothing the caller can do about this other than not calling it in the first place. so the fix here is a code change, not something that can be pre-coded.
another way to think about this: checked exceptions can be used for control flow. unchecked exceptions on the other hand can only be propagated up. they then end up in logs or presented to humans who can then do something about them.
# error domains
error values have similar 2 types (terminology from https://matttproud.com/blog/posts/go-errors-and-api-contracts.html): domain errors and opaque errors.
this was a key realization for me that escaped me for years of programming.
if a function can return a domain error, it should be clearly indicated in its documentation. example "go doc os.Open":
Open opens the named file for reading. If successful, methods on the returned file can be used for reading; the associated file descriptor has mode O_RDONLY. If there is an error, it will be of type *PathError.
anything else should be treated as an "opaque error". such errors should be propagated up or logged/presented when it can no longer be passed upwards. they should be never used for making control-flow decisions.
in general return opaque errors unless returning a domain error is explicitly needed. fmt.Errorf allows wrapping errors with both %v and %w: https://go.dev/blog/go1.13-errors#wrapping-errors-with-w. wrapping with %w keeps the error a domain error. therefore in most cases use only %v to ensure the returned error is opaque.
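a tiny sketch of the difference (uses errors and fmt from the stdlib; the ErrNotFound sentinel is made up just for illustration):

var ErrNotFound = errors.New("userdb.NotFound")

// opaque: the caller can only log or propagate this error.
func lookupOpaque() error { return fmt.Errorf("userpkg.LookupHash: %v", ErrNotFound) }

// domain: the caller can still match on the sentinel.
func lookupDomain() error { return fmt.Errorf("userpkg.LookupHash: %w", ErrNotFound) }

func main() {
  fmt.Println(errors.Is(lookupOpaque(), ErrNotFound)) // false
  fmt.Println(errors.Is(lookupDomain(), ErrNotFound)) // true
}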
# annotating errors
the main difference between error values and exceptions is that such propagation has to be done manually after each function call that can return an error. but this becomes super handy! let's take this example:
package userpkg

...

func VerifyPassword(request *http.Request) error {
  ...
  hash, err := sqlpkg.LookupUserColumn(request.FormValue("username"), "hash")
  if err != nil {
    return fmt.Errorf("userpkg.LookupHash username=%q: %v", request.FormValue("username"), err)
  }
  ...
}
with "return nil" type of error handling you might get this log entry:
/login failed: not found.
what was not found? with java-like exception handling you would get stacktraces too:
/login failed: NotFoundException
  sqlpkg.LookupUserColumn
  userpkg.VerifyPassword
  handlerspkg.RequestHandler
still unclear what was not found. but maybe this time one could make reasonable guesses about what the problem might be after a few hours of code reading. with the practice of adding handcrafted context at each level the log message could be this:
/login failed: handlerspkg.VerifyPassword request-id=12345: userpkg.LookupHash username="": userdb.NotFound
from this error message the error is immediately apparent: the request's form params don't contain a valid username. probably a form validation missed this before.
note that the stacktrace is not needed at all. the stacktrace helps to locate where the error happened but it doesn't tell us what exactly the error was. it doesn't tell the "story" of how the code led to the error.
the stacktrace is also very verbose and visually jarring. the above one is simple but in reality the callchain is dozens of lines and contains a lot of useless fluff. each log entry is very long and that makes scanning the logs hard. the handcrafted message is quite to the point. not only does it tell where the error is, it also tells how the code ended up in that state. it takes a lot of mystery detective work out of the debugging sessions.
in the above case each error message fragment has a unique prefix string. the uniqueness is ensured by the pkgname/ prefix, more on this later. the callchain can be easily reconstructed from this in the very rare cases when needed via simple grepping. and the callchain can be reconstructed even if there were some refactorings in the meantime. in the stacktrace case a refactoring would change line numbers and then it would be very tricky to follow the code exactly.
there are a bunch of proposals and libraries for stacktraces, see https://www.dolthub.com/blog/2023-11-10-stack-traces-in-go/. don't use them. if you do annotations well then you won't need them and debugging errors will be a breeze. stacktraces might allow you to get lazy with the annotations and you might end up having a harder time debugging.
# unique error message fragments
it's super handy when you have an error message and from it you can jump straight to code.
one way to achieve this is using source code locations in the error messages. this is what happens when the error includes stacktraces. as explained before this is quite verbose and spammy. furthermore the message on its own contains very little information without the source code.
another approach: make the error messages unique. this contains more useful information to a human reading it than a source code location. but it also allows jumping to the source code location directly with a grep-like tool. and the jump works even if the code was slightly refactored in the meantime.
there are proposals to add source code location tracing to fmt.Errorf or a similar function: https://github.com/golang/go/issues/60873. this should not be needed if you can keep the error message unique.
how do you keep the message unique?
the established pattern is that fmt.Errorf() adds a message, then a colon follows, then the wrapped error. to make it easy to find where an error message fragment begins and ends, make sure the fragment itself doesn't contain a colon.
don't do this:
fmt.Errorf("verify password for request-id:%d: %v", id, err)
but do this instead:
fmt.Errorf("verify password for request-id=%d: %v", id, err)
this will make scanning the errors for the fragments much easier.
but "verify password" might not be unique on its own. read on.
# error message wording
how to phrase the error annotation? keep it short. avoid stop words such as failed, error, couldn't, etc. this is painful to read:
/login failed: failed verifying password for request-id 12345: failed looking up hash for "": not found
when wrapping errors, phrase the message in the imperative mood describing what the function tried to do, simply because the imperative mood is short. always start it with a verb. this style is similar to function names: they also start with a verb and use the imperative mood. but don't include the function name in the message; focus on the action the function was doing when the error was encountered. the function name often doesn't matter and would be just visual noise (especially if the function is just a helper). the caller can often provide more accurate context (sometimes it's the function name, sometimes it's something better).
leaf level errors usually describe a bad state. it's ok to use passive stance for those (i.e. when not wrapping). example: "not found" in the above snippet.
some people advise this:
func RequestHandler(request *http.Request) (err error) {
  defer func() {
    if err != nil {
      err = fmt.Errorf("RequestHandler: %w", err)
    }
  }()
  ...
}
no, don't do it. it will make the errors harder to use. first, it might lead to avoiding describing the exact actions the function was doing and adding the necessary details. second, it breaks the unique string benefits: a simple grep to find code for an error will no longer work.
so don't name it based on the current function, name the error after what the current function was doing when the error occurred. now concatenate the words, CamelCase them, prefix them with the package name and the result is a near unique string. instead of
/login failed: failed verifying password for request-id 12345: failed looking up hash for "": not found
the error is this:
/login failed: handlerspkg.VerifyPassword request-id=12345: userpkg.LookupUserHash user="": userdb.NotFound
more about this at @/errmsg.
# avoid redundancy in annotations
if you squint enough then all this annotation work is actually writing a story. each layer or function has a piece of the full story and they have to include that fragment in the story. but the story gets boring and hard to read if it contains redundant information. take this example:
func readfile(filename string) (string, error) {
  buf, err := os.ReadFile(filename)
  if err != nil {
    return "", fmt.Errorf("read file %q: %v", filename, err)
  }
  return string(buf), nil
}

func f() {
  fmt.Println(readfile("foo.txt"))
}
the error message from this would say this:
read file "foo.txt": open foo.txt: no such file or directory
this is redundant. in this particular case it is fine to simply "return err". don't take the "always annotate" rule too much to heart. annotation is often not needed when propagating errors from helper functions, small wrappers of other functions from the same package. this is how go errors can avoid the java-like verbosity where each helper function is also included in the final stacktrace. if you do this then add a comment to be clear about it:
buf, err := os.ReadFile(filename)
if err != nil {
  // no error wrapping: os errors already contain the filename.
  return "", err
}
unfortunately you might not know beforehand that io errors all contain the filename. so in that case it's fine to err on the side of redundancy. simply remove the redundancy once you see that some errors are hard to read due to this.
writing a good story needs good artistic skills. those skills come with experience. don't worry too much about it. just make sure the errors contain all the important bits, even if duplicated.
# control flow
there's one big problem with all this manual error annotation: it's super slow. the good news is that it only happens on the error path which should be the rarer codepath. that assumes that you don't use errors for ordinary code logic.
this example from above is actually bad:
package sqlpkg
...
func LookupUserColumn(username, column string) (string, error)
compare it to this:
package sqlpkg
...
func LookupUserColumn(username, column string) (value string, found bool, err error)
this latter form distinguishes found/not-found from a sql database error such as a bad sql query or a connection error or database corruption. the not-found condition could be very frequent. and as such it would be frequently used to make code flow decisions. e.g. a not-found condition would lead to a user-friendly error message that the username doesn't exist while everything else would create an ops ticket to investigate.
checking that bool could be magnitudes faster than trying to extract the not-found condition from an error fragment. https://www.dolthub.com/blog/2024-05-31-benchmarking-go-error-handling/ has specific numbers for this, i highly recommend checking it out.
i recommend returning a dedicated return value for describing specific conditions if those conditions will often be used to alter the caller's codeflow. search for something like "exceptions code flow antipattern" or similar keywords to see more reasons why it's unhealthy to rely on having a lot of logic in error handlers.
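to make this concrete, here's roughly how the caller's code flow looks with the three return values; errUnknownUser is a made-up sentinel just for this sketch:

hash, found, err := sqlpkg.LookupUserColumn(username, "hash")
if err != nil {
  // unexpected database problem: annotate and propagate the opaque error.
  return fmt.Errorf("userpkg.LookupHash username=%q: %v", username, err)
}
if !found {
  // frequent, expected condition: handled via the bool, no error inspection needed.
  return errUnknownUser
}
// ... continue with hash ...

the frequent not-found branch never builds or parses an error, while the rare database failure still gets the full annotated story.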
# preconditions
suppose "func f(v *int) error" doesn't accept nil pointers. one is tempted to add an "assert(v != nil)" like logic to it. don't do it. return it as an error: if v == nil { return fmt.Errorf("mypackage.CheckNil variable=v") }.
why? if the application crashes due to this then the developer gets just a stacktrace. if it returns an error then the rest of the callers build up a "story" how the program ended up in the bad state. make sure to support this debugging experience.
though it makes no sense to add an error return value just to return errors for such bad invocation. it would be annoying if sqrt() would return (float64, error). only do this if the error return value is already there.
# metaphor
this type of error handling might feel as unnecessary busywork. medical surgeons also complained how annoying it was to wash hands or disinfect the surgical tools. after all no harm is done if they don't do it, right? it turns out the harm comes much later. once the medical profession learned this, they decided to accept the cost.
annotating errors is similar. the value of them is not apparent. the value becomes apparent when problems start arising. my hope is that the coding profession will recommend always-annotated errors too instead of exceptions-like error handling once it observes how good error messages make our lives much easier.
# references
this post was inspired by reading many other blog posts. i probably forgot to list all my sources but here are some of them i remember:
# takeaways
there's nothing wrong with error handling in go. all those error handling improvement proposals? not needed! it's good as it is.
the only problem with go's error handling is that it's verbose: needs 3 lines. i'll rant about this in my next post, stay tuned.
as a summary here are my key points from this post:
edits:
[non-text content snipped]
published on 2024-10-07, last modified on 2024-10-26
# errmsg: use identifiers as error strings to make error searching easier
i used to debug a lot with debuggers. i no longer do so. why? because i no longer need to. if something goes wrong then i have a pretty good idea about the problem thanks to the error message from my tool. i've explained in @/goerrors how to make error messages this useful.
in that post i mentioned that the error messages are basically a story of how an application ended up in that bad state. but i was still unsure about the right format back then. i continued experimenting after that post and i found this format having the best tradeoffs (i already updated the post):
pkg1.CamelCaseAction var1=value1 var2=value2: pkg2.WrappedErrorAction var3=value3: pkg3.SomeBadState var4=value4 (some free form explanation in parens)
let me unpack that. go established the practice that each error has a string representation. this string representation always includes the text of the child error:
if err != nil {
  return fmt.Errorf("<some string here>: %v", err)
}
the question is, what should <some string here> be? the answer is "pkgname.CamelCasedErrorName context_variable1=value1 context_variable2=value2 ...".
previously i used normal sentence-like strings but i found the identifier name much easier to work with! so instead of
verify password for request-id 12345: lookup hash for "": not found
the error message would be this:
handlerspkg.VerifyPassword request-id=12345: userpkg.LookupPasswordHash username="" (user not logged in?): userdb.UserNotFound
this gives each error string a near unique token. makes it super easy to search for without the need for quotes and knowing where an error fragment starts and ends. it takes a bit of practice to read and write them well but i am convinced the simplicity in exchange is worth it. also note how userpkg.LookupPasswordHash has some free-form hint on what the problem might be. most errors don't need such hints though.
the identifier names an action the function is trying to do when the error happened. similarly to functions it should usually start with a verb except for the leaf level errors.
i also allow nested tokens in complex functions. e.g. "handlerpkg.VerifyUser.LookupPasswordHash" would be acceptable in some rare cases. keep it simple wherever possible though.
there are other things to keep in mind: avoid stop words, avoid redundancy, prefer opaque errors, etc. check out @/goerrors for more tips about error handling.
i started using this form even in log messages. works quite well there too. but that's a story for another day.
note from 2024-11-16: don't use plain "main." prefix in binaries, use the binary's name. the "main." would have too many clashes when searching otherwise. i am now using this style of error messages in more of my code and my life became so much easier! jumping to code right from the short error message is now super simple. i highly recommend doing this!
published on 2024-10-28, last modified on 2024-11-16
# cstatus: error code with a message is all i need for errors in c
as explained in @/goerrors and @/errmsg i'm quite fond of go's error handling. before go i coded in c. error handling always bothered me there. but i wonder now that i have some go experience: could i design something simple enough for c that i would be happy with? it turns out yes!
most error handling in c is just returning an error code. and my typical way to handle that is to put it into a CHECK macro. CHECK is like ASSERT but meant to be always enabled, even in release builds. here's how it used to look:
int fd = open(...);
if (fd == -1 && errno == ENOENT) {
  // handle this specific error.
  ...
}
CHECK(fd != -1);  // handle all other unexpected errors
...
sz = read(fd, ...);
CHECK(sz != -1);
...
the application just crashed when there was an unexpected error. as explained in @/goerrors, debugging such crashes wasn't always easy.
# requirements
so what do i need? i really like the error code based error handling. that's all i need 99% of the cases: "if error is A, do B. if error is C, do D. ...".
but i also need the context to make understanding the error easy. this can be represented via a simple string.
so that's it: i only need an error code and a string.
# error domains
there's one catch though. errors have domains. examples: errno codes from the c library, http status codes, grpc canonical codes.
notice how all of these codes are just small numbers. so here's the idea: error codes are 64 bit unsigned numbers (8 bytes). 6 bytes represent the domain as an ascii string, 2 bytes (0..32767) represent the error code from that domain.
take ENOENT from the errno domain. ENOENT is 2, the domain's ID is just "errno". encode it as the following:
0x006f6e7272650002
    o n r r e
the "errno" is reversed here because most machines are little endian, so the bytes are stored in reverse order. printing 5 letters starting from the 3rd byte of that uint64 data blob gets "errno". in @/abnames i write more about my admiration for short names.
so somewhere in a header i would have this:
enum errnoCode {
  // ...
  errnoENOENT = 0x006f6e7272650002,
  // ...
};
then i can do this in my error handling code:
uint64_t errcode = somefunc();
if (errcode == errnoENOENT) {
  // handle errnoENOENT
} else if (errcode != 0) {
  // propagate all other errors as internal error.
  return canonicalInternal;
}
but this on its own is not enough because it doesn't allow me to append context and nuance in the form of an error message.
# status
i really like grpc's status proto: https://google.aip.dev/193#http11json-representation. it's a bit overcomplicated to my taste so here let me simplify it to my code+message needs in c:
typedef struct {
  uint64_t code;
  int msglen;  // excluding the terminating 0 byte
  char msg[];  // has a terminating 0 byte.
} status;
that's it. all it has is a code and a zero terminated string. it also uses the trick where the string is at the end of the struct rather than in a separate memory block. this way the string buffer doesn't have to be freed separately.
in order to use this, i also need 3 helper functions:
// The returned status must be freed.
// wrapped, if passed, is freed as part of the wrapping.
status* statusNew(status* wrapped, const char* format, ...);
status* statusNewDomain(status* wrapped, uint64_t code, const char* format, ...);
status* statusAnnotate(status* wrapped, const char* format, ...);
there's a lot to unpack here so let me demonstrate this through an example. a hypothetical go inspired io module could have the following functions:
typedef struct {
  void* data;
  int len;
  int cap;
} ioBuffer;

status* ioOpen(int* fd, const char* filename);
status* ioClose(int* fd);
status* ioReadFile(ioBuffer* buf, const char* filename);
notice how all functions return a status pointer. the rule is this: NULL status means no error. non-NULL status means error.
the ioOpen and ioClose functions could look like this:
// ioOpen opens a file for read only.
// On error returns an error from the errno domain.
// The error message will contain the filename.
status* ioOpen(int* fd, const char* filename) {
  *fd = open(filename, O_RDONLY);
  if (*fd == -1) {
    return statusNewDomain(NULL, errnoDomain + errno, "io.OpenForRead filename=%s", filename);
  }
  return NULL;
}

status* ioClose(int* fd) {
  if (*fd == -1) {
    return NULL;
  }
  if (close(*fd) != 0) {
    return statusNewDomain(NULL, errnoDomain + errno, "io.Close");
  }
  *fd = -1;
  return NULL;
}
they return errors from the errno domain. ioClose takes a fd pointer so that it can be passed an already closed fd descriptor and do nothing for it. this will come in handy when using the defer construct:
// ioReadFile appends the contents of the file to buf.
// On error returns an error from the errno domain.
// Most errors will contain the filename.
// Always free buf->data, even on error.
status* ioReadFile(ioBuffer* buf, const char* filename) {
  int fd;
  status* st = ioOpen(&fd, filename);
  if (st != NULL) {
    return st;
  }
  defer { free(ioClose(&fd)); }
  constexpr int bufsize = 8192;
  char tmpbuf[bufsize];
  while (true) {
    int sz = read(fd, tmpbuf, bufsize);
    if (sz == 0) {
      break;
    }
    if (sz == -1) {
      return statusNewDomain(NULL, errnoDomain + errno, "io.ReadFromFile filename=%s", filename);
    }
    if (buf->cap - buf->len < sz) {
      int newcap = 2 * (buf->cap + 1);
      if (newcap - buf->len < sz) {
        newcap = buf->len + sz;
      }
      buf->data = xrealloc(buf->data, newcap);
      buf->cap = newcap;
    }
    memcpy(buf->data + buf->len, tmpbuf, sz);
    buf->len += sz;
  }
  return ioClose(&fd);
}
note that when there's no error, ioClose gets called twice. the second time it's called from defer. but that's fine because this time it will be a no-op. this is a nice pattern from go to guarantee close() and to properly handle its error too on the error-free path.
so... umm... defer in c... yes it's possible with a non-standard compiler extension. it's super awesome, much nicer than gotos. but i cannot go into all tangents so just check out the full source code at the end of the post if interested.
oh, you noticed the "constexpr" bit too? it's not a typo, i didn't accidentally write c++. this is c23. welcome to the modern age.
there's a lot more to unpack here... i won't do that for now, just marvel at the code until it makes sense.
# internal errors
in the above example the io functions returned an error from the errno domain. but most of the time the error is unexpected, doesn't fit into a clear domain. in that case return an opaque, internal error with statusNew(). opaque errors are not meant to be inspected or to be used in control flow decisions. they just need to be presented to a human through log messages or other form of alerts.
let's study a hypothetical "printFile" function that prints a file:
status* printFile(const char* fname) {
  ioBuffer buf = {};
  status* st = ioReadFile(&buf, fname);
  defer { free(buf.data); }
  if (st != NULL) {
    return statusAnnotate(st, "test.ReadFile");
  }
  size_t sz = fwrite(buf.data, 1, buf.len, stdout);
  if ((int)sz != buf.len) {
    return statusNew(NULL, "test.PartialWrite");
  }
  return NULL;
}
statusAnnotate keeps the existing domain code of a status and just prepends a context message. so test.ReadFile in this case would be an errno domain error. the caller could handle the errnoENOENT code (file not found) in a nice, user friendly manner.
test.PartialWrite is an opaque error because it was constructed via statusNew() which doesn't take a code. the caller shouldn't act on this error, just propagate it up. in this case it's triggered when fwrite() reports a partial write. this could happen if stdout is piped into a file and the disk is full. but there could be many other reasons. this function doesn't want to care about the various conditions so it just returns an internal error.
notice @/errmsg in action: because i use the identifier form for the various error conditions, it is much easier to reference and talk about them.
# wrapping errors
now suppose for some reason i'm writing a function that needs to return errors from the http domain. the errors can be wrapped like this then:
status* run(int argc, char** argv) {
  if (argc != 2 || argv[1][0] == '-') {
    printf("usage: test [filename]\n");
    return statusNewDomain(NULL, httpBadRequest, "test.BadUsage argc=%d", argc);
  }
  status* st = printFile(argv[1]);
  if (st != NULL) {
    if (st->code == errnoENOENT) {
      return statusNewDomain(st, httpNotFound, "");
    }
    if (st->code == errnoEACCES) {
      return statusNewDomain(st, httpForbidden, "");
    }
    return statusNewDomain(st, httpInternalServerError, "");
  }
  return NULL;
}

int main(int argc, char** argv) {
  status* st = run(argc, argv);
  if (st != NULL) {
    printf("error: %s\n", st->msg);
    free(st);
    return 1;
  }
  return 0;
}
then here's how the various error messages could look like:
$ ./test
usage: test [filename]
error: http.BadRequest: test.BadUsage argc=1
$ ./test /nonexistent/
error: http.NotFound: test.ReadFile: errno.ENOENT (no such file or directory): io.OpenForRead filename=/nonexistent/
$ ./test /root/.bash_history
error: http.Forbidden: test.ReadFile: errno.EACCES (permission denied): io.OpenForRead filename=/root/.bash_history
$ ./test /root/
error: http.InternalServerError: test.ReadFile: errno.EISDIR (is a directory): io.ReadFromFile filename=/root/
notice how simple the resource management is. main() consumes the status, it doesn't propagate it up. in order to free it, it only needs a single free() call. easy peasy!
# creating domains
ugh, this is where things get ugly. this needs lots of boilerplate but magical macros can help a lot.
before i jump into this: i'm following go's naming convention even in c. if i work on the "status" package then all symbols are prefixed with status and then CamelCase names follow.
let's start with something simple: converting an at most 6 byte long string to a uint64. this is needed for getting the domain part of the code. here's how it could look like:
#define statusMKDOMAINID(str) (                          \
    (sizeof(str) > 0 ? (uint64_t)str[0] << 2 * 8 : 0) +  \
    (sizeof(str) > 1 ? (uint64_t)str[1] << 3 * 8 : 0) +  \
    (sizeof(str) > 2 ? (uint64_t)str[2] << 4 * 8 : 0) +  \
    (sizeof(str) > 3 ? (uint64_t)str[3] << 5 * 8 : 0) +  \
    (sizeof(str) > 4 ? (uint64_t)str[4] << 6 * 8 : 0) +  \
    (sizeof(str) > 5 ? (uint64_t)str[5] << 7 * 8 : 0) +  \
    0)
then statusMKDOMAINID("errno") would give 0x6f6e7272650000.
whenever a new domain is defined, there are several structures that need to be defined:
fortunately x macros can make this pretty simple (https://en.wikipedia.org/wiki/X_macro). here's how the http domain could be defined:
constexpr uint64_t httpDomain = 0x707474680000;  // statusMKDOMAINID("http")

#define httpCODES \
  X(http, OK, 200, OK) \
  X(http, BadRequest, 400, InvalidArgument) \
  X(http, Forbidden, 403, PermissionDenied) \
  X(http, NotFound, 404, NotFound) \
  X(http, InternalServerError, 500, Internal) \
  X(http, CodeCount, 600, Unknown)

#define X statusENUMENTRY
enum httpCode { httpCODES };
#undef X

extern const uint64_t httpStatusCode[statusCOUNT(http) + 1];
extern const char* httpCodeName[statusCOUNT(http) + 1];
the two additional arrays could be defined like this:
#define X statusSTATUSCODEENTRY
const uint64_t httpStatusCode[statusCOUNT(http) + 1] = {httpCODES};
#undef X

#define X statusNAMEENTRY
const char *httpCodeName[statusCOUNT(http) + 1] = {httpCODES};
#undef X
the definitions of statusENUMENTRY, statusSTATUSCODEENTRY, and statusNAMEENTRY are ugly. i spare the reader from that. check the full source code at the end if curious.
# takeaways
aaanyway, there's a lot of fluff here, i know. and perhaps it looks a little bit overcomplicated. but i really enjoyed writing this c code. it's not much harder to write this than in go. and i can totally imagine happily using something like this in c if i ever program in c again.
a lot of this is a matter of tradeoff between complexity and ease of use. if the struct allowed incorporating custom objects (like how grpc does it) then it would require a much more complex api. that would be very awkward to use from c. 99% of the time i don't need that so i think the simpler interface is better and i won't hate coding and error handling because of it.
the full source code is at @/cstatus.textar. there's a lot of things i didn't mention. there are some things that could be done better. but hey, future me, i don't code much in c, so be glad i documented the main points at least, ha!
published on 2024-11-04, last modified on 2024-11-16
# flagstyle: keep flags before the positional arguments
there are many schools of thought about command line flags:
as with everything with go, i found the ordering rule for the flags weird at first. but over time i learned to appreciate it. now it's my favorite style.
over time i also developed a few more rules i personally adhere to when passing flags:
when it makes sense i sometimes add checks to my tools to enforce the second rule to eliminate potential ambiguity.
but why?
# subcommands
some tools do this:
toolname -globalflag1=value1 subcommand -subflag2=value2 arg1 arg2
in this case -subflag2 is a subcommand specific flag and must come after the subcommand. i personally don't like this. as a user i can't really remember which flag is global and which flag is subcommand specific. this also allows redefining the same flag (such as -help or -verbose) twice and then the confusion intensifies. the form should be this:
toolname -globalflag1=value1 -subflag2=value2 subcommand arg1 arg2
when the tool is initializing it should find the subcommand and register the subcommand's flags into the global flag namespace. this has to happen before the flags are parsed because the flag definitions depend on the subcommand. but extracting the subcommand without knowing which flags are bools is only possible if all non-bool flags use the "-flagname=value" form. that's why i enforce that form in my tools.
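here's a minimal sketch of that extraction (not necessarily how @/flagstyle.go does it); it only works because every flag is guaranteed to have the -name=value form so no flag ever consumes the next argument:

// subcommand returns the first non-flag argument.
// it must run before flag.Parse so the subcommand's flags can still be registered.
func subcommand() string {
  for _, arg := range os.Args[1:] {
    if !strings.HasPrefix(arg, "-") {
      return arg
    }
  }
  return ""
}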
as an example let's take a hypothetical "compressor" application with two subcommands, "compress" and "decompress". running without any argument or just a -help would print a generic help message:
$ compressor --help
usage of compressor: compressor [flags...] [subcommand]
subcommands:
  compress: compress a file.
  decompress: decompress a file.
use `compressor -help [subcommand]` to get more help.
running the help for a subcommand would print both the subcommand specific and global flags separately:
$ compressor -help compress
usage of the compress subcommand: compressor [flags...] compress
compresses a file.
subcommand flags:
  -input string
        input filename. (default "/dev/stdin")
  -level int
        compression level between 1 and 9, 9 the best but slowest. (default 5)
  -output string
        output filename. (default "/dev/stdout")
global flags:
  -force
        auto-confirm all confirmation prompts. dangerous.
  -verbose
        print debug information.
and it would also detect incorrect usage:
$ compressor -level 6 compress
error: main.UnknownSubcommand subcommand=6
exit status 1
$ compressor compress -level=6
error: main.BadFlagOrder arg=-level=6 (all flags must come before the subcommand and must have the -flag=value form)
exit status 1
both global and verbose flags must come before the subcommand:
$ compressor -verbose -level=6 compress
compressing /dev/stdin into /dev/stdout, level=6, verbose=true.
see @/flagstyle.go for one potential (not necessarily the nicest) way to implement this. it uses reflection to magically create flags from structs. notice how the subcommand detection happens before flag.Parse(). that's only possible if all flag values use the -name=value syntax, hence the check for it.
# command wrapping
the command wrapping usecase is my primary motivation to have all flags as left as possible. take something like ssh:
ssh [ssh_flags...] [machine-name] [command] [command-args...]
# example: ssh -X myserver uname -a
# go flag parsing:
ssh -X jumphost ssh -X myserver uname -a
# getopt flag parsing:
ssh -X -- jumphost ssh -X myserver -- uname -a
you have to litter the commandline with --. some people like this sort of separation. but i am now using such commands extensively for years and i prefer to not have the -- markers. the former style gets natural very fast.
it might seem like a rare usecase but at work i use surprisingly many tools that have some sort of "pass/forward all subsequent args unchanged" need:
i rely on these tools so much that i had to learn to keep my flags on the left. then i might as well do so everywhere. i started doing that and realized my life got much easier.
# short options
some people love short options. e.g. they can write "ls -lh" instead of "ls --long --human-readable". i don't miss short options in my tools. if that's really needed then perhaps make the first arg a short option collection like in tar or ps unix commands:
# create tar, verbose output, output file is output.tar:
tar cvf output.tar file1 file2 ...
# show all processes, format nicely:
ps auxw
the ls interface could have been similar:
# show permissions, owner, and name:
ls pon directory1 directory2 ...
or if sacrificing the first positional argument feels too much then put all that into a single flag:
$ ls --help
...
flags:
  -show=flags: pick the fields to show for each entry.
...
$ ls -show=pon directory1 directory2 ...
# takeaways
in summary my recommendation is to only allow the -flag=value form of flags and to require all flags to come before the positional arguments. it's awkward at first but one gets used to it quickly and it allows combining commands in a more natural manner. this in turn leads to a more pleasant command line experience with fewer gotchas. shells already have too many gotchas anyway.
published on 2024-11-11
# funcdriven: use function driven tests instead of table driven tests
i would like to give my wholehearted endorsement to this article: https://itnext.io/f-tests-as-a-replacement-for-table-driven-tests-in-go-8814a8b19e9e.
it advocates replacing table driven tests like
func TestStringsIndex(t *testing.T) {
	tests := []struct {
		name   string
		s      string
		substr string
		want   int
	}{
		{
			name:   "firstCharMatch",
			s:      "foobar",
			substr: "foo",
			want:   0,
		},
		{
			name:   "middleCharMatch",
			s:      "foobar",
			substr: "bar",
			want:   4,
		},
		{
			name:   "mismatch",
			s:      "foobar",
			substr: "baz",
			want:   -1,
		},
	}

	for _, tc := range tests {
		t.Run(tc.name, func(t *testing.T) {
			got := strings.Index(tc.s, tc.substr)
			if got != tc.want {
				t.Fatalf("unexpected n; got %d; want %d", got, tc.want) // line 32
			}
		})
	}
}
with function driven tests like
func TestStringsIndex(t *testing.T) {
	f := func(s, substr string, nExpected int) {
		t.Helper()
		n := strings.Index(s, substr)
		if n != nExpected {
			t.Fatalf("unexpected n; got %d; want %d", n, nExpected)
		}
	}

	// first char match
	f("foobar", "foo", 0)

	// middle char match
	f("foobar", "bar", 4) // line 15

	// mismatch
	f("foobar", "baz", -1)
}
in case of error this is what you see in the former case:
> t.Fatalf("unexpected n; got %d; want %d", got, tc.want)
funcdriven_test.go:32: unexpected n; got 3; want 4
in the latter case this is what you see in your editor:
> // middle char match
> f("foobar", "bar", 4)
funcdriven_test.go:15: unexpected n; got 3; want 4
basically the error message points directly to the place where the erroneous data is. makes working with tests super convenient.
i used table driven tests for a long time but i have now switched over to this. i can confirm from experience that i find these much easier and more natural to work with.
and when ready for an even bigger leap of faith then use https://pkg.go.dev/github.com/ypsu/efftesting to automate away the manual maintenance of the "want" argument.
i am starting to like writing tests, yay.
published on 2024-11-18
# gorun: run go code straight from web via go run
i don't have moral problems with "curl https://example.com/sometool-install | bash". my biggest problem with it is that one should not program shell scripts in the 21st century. these shell scripts are not standardized: each script installs random crap into different places. and after i stop using the tool, the out of date trash remains around.
fortunately go has a much nicer alternative: "go run example.com/sometool@latest". or if the example.com isn't a git repo then: "go run github.com/example/sometool@latest". this will download, compile, and run the latest version of the tool. subsequent runs will use the cached binary. go needs to be installed on the user's machine but it's not a huge package, i think that's acceptable.
because it will compile everything on the user's machine, the tool needs to be compact: avoid huge code and sprawling dependencies. it's surprisingly easy to get a lot of things done in go with the standard packages only. embrace that, some small duplication here and there doesn't hurt.
one downside of using the @latest tag is that it would trigger lots of redownloading and recompilation as the tool gets developed. avoid this issue by using a dev branch for development. merge the dev changes into the main branch only on a weekly basis. but if the tip is broken then the user can always specify a specific version: "go run github.com/example/sometool@v1.23.45".
a special "prev" branch could be maintained for the previous release too which lags behind the main branch by a week. then users can run a simple "go run github.com/example/sometool@prev" to run the previous stable version if the latest one is broken. it might take a few hours until go caches pick up any changes in branches though. therefore update the prev branch a day before updating the main branch to ensure the user can go back as soon as the @latest tag gets updated. (there's a trick to invalidate the @latest cache by force requesting a new version but i haven't found such a trick for @branch references.)
the user can also perma-install with "go install github.com...". this puts the binary into the default $GOBIN directory. the user can then run the tool without "go run" if that path is in $PATH. startup will be a bit faster but the tool won't auto-update.
users can set up aliases too:
alias sometool="go run example.com/sometool@latest"
i prefer shell wrappers to aliases because then i can use them from other tools such as vim:
$ cat .bin/sometool
#!/bin/sh
go run github.com/example/sometool@latest
that's all i need to put into my dotfiles repo, i don't need to litter it with complex makefiles and install scripts. and it works out of the box on both archlinux and debian and is always up to date.
there are a couple of tools i now use like this:
later i plan to use this method for setting up new personal machines with a single command. another usecase i have is to run a static set of pre/post commit checks in my git repos without needing to deal with writing complex shell scripts.
example:
$ go run github.com/ypsu/textar/bin/textar@latest -help
Manipulate .textar files.
Create a textar file: textar -c=archive.textar file1 file2 file3
Extract a textar file: textar -x=archive.textar
[...]
or in case you want to try it with a more official tool:
$ go run golang.org/x/exp/cmd/txtar@latest --help
Usage of /tmp/go-build1914875111/b001/exe/txtar:
  -extract
        if true, extract files from the archive instead of writing to it
  -list
        if true, list files from the archive instead of writing to it
  -unsafe
        allow extraction of files outside the current directory
  -x    short alias for --extract
very convenient. i wish companies would stop the curl|sh approach in favor of this. this has much better properties.
published on 2024-11-25
# envchain: a conceptually simpler alternative to go contexts
ever used environment variables in unix? it's a mechanism to subtly pass configuration or other data down into child processes. each child can then crawl through all the environment variables and change its behavior based on it. it's not very clean to rely on envvars but they are quite practical.
go context is very similar but for functions. it's also just a random collection of key/values that functions can crawl through and make use of. but go's context interface is unnecessarily complex: it also includes functions for deadlines and cancellation. and the interface for storing arbitrary data in a context is hard to use.
there's another interesting way to think of contexts i heard of: it's reverse errors. as of go 1.13, errors are chains of values passed upwards (https://go.dev/blog/go1.13-errors). errors can wrap or annotate other errors. this way the deepest function can communicate information to the topmost function. the upper functions can use https://pkg.go.dev/errors#As to extract specific keys from this linked chain of values.
go context is then the reverse: it's also a chain of values. but here the topmost function can communicate information down to the deepest function. in error's case functions willing to participate in such information up-passing must have an error return value. in context's case functions willing to participate in such information down-passing must have a context function parameter.
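a tiny example of the upward direction (standard library only, nothing envchain specific): the deepest layer produces a *fs.PathError and the topmost caller extracts it with errors.As:

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	// os.Remove returns a *fs.PathError from deep inside the os package;
	// errors.As walks the error chain upwards and extracts it here at the top.
	err := os.Remove("/no/such/file")
	var pathErr *fs.PathError
	if errors.As(err, &pathErr) {
		fmt.Println("failing path:", pathErr.Path)
	}
}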
# env
anyway, with those thoughts in my mind, here's a way to implement such value downpassing in a minimalistic manner:
package envchain

type Link struct {
	Parent *Link
	Value  any
}
envchain.Link is a linked list of any values. the package would have a helper to extend the chain:
func (env *Link) Append(v any) *Link { return &Link{env, v} }
and similarly to errors.As, there would be an envchain.As:
func As[T any](env *Link, target *T) bool {
	if target == nil {
		panic("envchain.EmptyAsTarget")
	}
	for env != nil {
		var ok bool
		if *target, ok = env.Value.(T); ok {
			return true
		}
		env = env.Parent
	}
	return false
}
this works similarly to errors.As: extract any value up the chain.
and instead of something like
package exec

func CommandContext(ctx context.Context, name string, arg ...string) *Cmd
you would have this:
func CommandEnv(env *envchain.Link, name string, arg ...string) *Cmd
or just this if backwards compatibility isn't a problem:
func Command(env *envchain.Link, name string, arg ...string) *Cmd
sidenote: in general avoid overloads. it doesn't make sense to have both a non-env-taking and an env-taking variant of a function. if it turns out a function needs an env or context then just add it. it's similar to its error counterpart: it doesn't make sense to have both a void and an error returning function:
func MyOperation()
func MyOperationWithError() error
the latter only makes sense if MyOperation must be kept intact due to backwards compatibility. i recommend evolving the codebase and removing such redundancies to ensure the packages remain clean. major version bumps are annoying, @/featver is an alternative for people not taking go's semver rules too seriously.
# passing down values
you can pass down any value this way. e.g. to pass down and then later read out a username:
package mypkg

type username string

// create a new chain from the parent chain with a username value in it
// (name is whatever string we want to pass down):
env = env.Append(username(name))
...

// to extract it:
var u username
if envchain.As(env, &u) {
	fmt.Printf("username is %s.\n", u)
} else {
	fmt.Printf("username not found.\n")
}
you can use this to pass down values without the intermediate functions needing to know about them. much easier to use than https://pkg.go.dev/context#Context.Value. a common example (and one that context specializes in) is cancellation.
# cancellation
cancellation could be implemented as a standalone package apart from envchain. e.g. a structure like this:
$ go doc abort.aborter
type Aborter struct {
	// Has unexported fields.
}

func New(parent *envchain.Link) (*envchain.Link, *Aborter)
func WithDeadline(parent *envchain.Link, d time.Time) (*envchain.Link, *Aborter)
func WithTimeout(parent *envchain.Link, timeout time.Duration) (*envchain.Link, *Aborter)
func (a *Aborter) Abort(cause string)
func (a *Aborter) Deadline() time.Time
func (a *Aborter) Done() <-chan struct{}
func (a *Aborter) Err() error
it's similar to context's cancellation management. and can be used similarly:
env, aborter := abort.New(env)
defer aborter.Abort("function ended")
...
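here's a rough sketch of what a minimal Aborter could look like. this is not the @/envchain.textar code: it ignores parent cancellation and deadline timers (which the real version handles with the goroutines mentioned at the end), it reuses the InfiniteFuture variable defined further below, and the envchain import path is made up:

package abort

import (
	"errors"
	"sync"
	"time"

	"example.com/envchain" // hypothetical import path for the envchain package above
)

type Aborter struct {
	deadline time.Time
	done     chan struct{}

	mu  sync.Mutex
	err error
}

// New appends a fresh Aborter to the chain and returns both.
func New(parent *envchain.Link) (*envchain.Link, *Aborter) {
	a := &Aborter{deadline: InfiniteFuture, done: make(chan struct{})}
	return parent.Append(a), a
}

// Abort cancels the aborter; subsequent calls are no-ops.
func (a *Aborter) Abort(cause string) {
	a.mu.Lock()
	defer a.mu.Unlock()
	if a.err == nil {
		a.err = errors.New(cause)
		close(a.done)
	}
}

func (a *Aborter) Done() <-chan struct{} { return a.done }
func (a *Aborter) Deadline() time.Time   { return a.deadline }

func (a *Aborter) Err() error {
	a.mu.Lock()
	defer a.mu.Unlock()
	return a.err
}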
and it would be pretty easy to provide context compatibility too:
func FromContext(ctx context.Context) *envchain.Link
func ToContext(env *envchain.Link) context.Context
aborter would also honor the deadlines and cancellation from contexts up the chain.
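a guess at how the FromContext direction could work (not necessarily how @/envchain.textar implements it): a context already has Done, Err, and Deadline methods, so simply storing it as a chain value lets the helpers below find it through the same interface checks they use for Aborter:

package abort

import (
	"context"

	"example.com/envchain" // hypothetical import path
)

// FromContext wraps a context into a chain link. because context.Context
// already has Done(), Err(), and Deadline() (time.Time, bool) methods, the
// Done/Err/Deadline helpers below pick it up without any extra glue.
func FromContext(ctx context.Context) *envchain.Link {
	return &envchain.Link{Parent: nil, Value: ctx}
}

the ToContext direction would need a small adapter type that implements context.Context by walking the chain, so it's not sketched here.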
to make it easy to extract the current cancellation status from an env, abort would provide these helpers:
func Deadline(env *envchain.Link) time.Time
func Done(env *envchain.Link) <-chan struct{}
func Err(env *envchain.Link) error
here's how the Done function could be implemented:
type Abortable interface {
	Done() <-chan struct{}
	Err() error
}

func Done(env *envchain.Link) <-chan struct{} {
	var a Abortable
	if envchain.As(env, &a) {
		return a.Done()
	}
	return nil
}
this can extract the Done() from both Aborters and Contexts. it also works if the chain doesn't contain any of them: it returns a nil channel which blocks forever when read from (i.e. the context is never done).
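to show the shape of a typical caller (the package name and resultCh are made up purely for illustration):

package workerdemo

import (
	"example.com/abort"    // hypothetical import paths, matching the
	"example.com/envchain" // packages sketched in this post
)

// process waits for either a result or a cancellation. if env has no aborter
// or context in it then abort.Done(env) returns nil and that select branch
// simply never fires.
func process(env *envchain.Link, resultCh <-chan string) (string, error) {
	select {
	case <-abort.Done(env):
		return "", abort.Err(env)
	case result := <-resultCh:
		return result, nil
	}
}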
a deadline function would be more complex since Aborter has a different (simpler) return value for Deadline:
var InfiniteFuture = time.UnixMilli(1<<63 - 1)

type Expirable interface {
	Deadline() time.Time
}

// For backward compatibility with context.
type expirable2 interface {
	Deadline() (time.Time, bool)
}

func Deadline(env *envchain.Link) time.Time {
	for env != nil {
		if d, ok := env.Value.(Expirable); ok {
			return d.Deadline()
		}
		if d2, ok := env.Value.(expirable2); ok {
			d, ok := d2.Deadline()
			if !ok {
				return InfiniteFuture
			}
			return d
		}
		env = env.Parent
	}
	return InfiniteFuture
}
this is an example where walking the chain explicitly is helpful. this is why envchain.Link members are exported. otherwise this function would need to walk the chain twice when trying to look for both contexts and aborters.
the full source is available at @/envchain.textar. the aborter package is a bit slower than context because it is unoptimized: it creates 2 goroutines for each new aborter. this could be optimized down to 0 with an "abortmanager" object that can manage many channels concurrently with https://pkg.go.dev/reflect#Select without needing to create a goroutine for each. the first aborter in the chain would create an abortmanager, the rest of the aborters would register into that. but all this is beside the point of envchain.
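for reference, a tiny sketch of the reflect.Select trick mentioned above (the abortmanager package and waitAny helper are hypothetical, not from @/envchain.textar):

package abortmanager

import "reflect"

// waitAny blocks until any of the given done channels is closed and returns
// its index. one goroutine calling this can watch arbitrarily many parents
// instead of spawning a goroutine per aborter.
func waitAny(dones []<-chan struct{}) int {
	cases := make([]reflect.SelectCase, len(dones))
	for i, ch := range dones {
		cases[i] = reflect.SelectCase{Dir: reflect.SelectRecv, Chan: reflect.ValueOf(ch)}
	}
	chosen, _, _ := reflect.Select(cases)
	return chosen
}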
# my plans
changing context in go is futile at this point. that is set in stone. i'll stick to it in my projects.
but if i ever get a project where i would need to use a lot of indirect value passing then i might switch to envchains because it's easier to reason about and work with. it's compatible with context after all, see example.go in @/envchain.textar.
published on 2024-12-02
# mementomori: remember to die and look forward to it
this post has non-textual or interactive elements that were snipped from this backup page. see the full content at @/mementomori.html.
death is unavoidable. i don't worry too much about it. in fact i look forward to it. i find it intriguing. i'm curious to see what being dead will be like.
# unlikely options
so what happens after we die? i know about these main theories:
void or nothingness: i won't experience anything anymore. i like this option: if there's no experiencing, there's no feelings, i can't feel anything, so i can't feel bad about this either. i wouldn't mind this but i find this very improbable. i just find it too weird that there's nothing for very long, then there's some experience, then there's nothing again. doesn't make too much sense from a math perspective.
reincarnation: you get to live another life randomly. most life is quite hard and challenging so chances are this comes with suffering. sometimes it's your current life that is creating the suffering for your next life (imagine being one of those bad dictators that ruin many lives). i'm not too worried about this possibility either. i feel like a lucky person, i'm sure i would luck into a fun next life too. but again, i find this unlikely for the same reason as above: it had to start sometime and end sometime. but is there void before and after? doesn't make sense from a math perspective. though it could be cyclical: the universe dies but then is reborn in the same configuration and all experiences get replayed. but that means "something" exists from the infinite past until the infinite future, which sounds too complex to me logically.
"the egg" from andy weir: https://www.galactanet.com/oneoff/theegg_mod.html. i quite like this story. it's an interesting mix between reincarnation and simulation. i find it improbable but nevertheless i wanted to mention it because i find it very well written.
# simulation
and the final option i know is "we live in a computer simulation like a video game". i find this the most palatable theory. it goes against occam's razor, i know. but it gives a simple satisfying answer about the world and i can sleep better without my brain trying to make sense of how the universe works. i'm not claiming that we live in a simulation, i just say i assume we live in a simulation because that is for me the simplest answer that keeps my brain calm. i can imagine the universe as a video game and that's it. when we die the words "game over" appear and we get escorted to the outer world for a debrief. then we'll have a chance to play again or do some other otherworldly things.
i think simulation is the most popular theory for others too. i count most religions such as christianity into this category. it also has a concept of creator (god) and the concept of an outer world (heaven and hell). same thing but with mysticism because back in the middle ages we didn't have computers so the concept of a simulation was hard to imagine.
my crazy beliefs go even further: i also believe in intelligent design. evolution was slightly prodded in order to build humans as they are today. dinosaurs were eliminated because they were too messy and ugly or something like that. if you are going to create a world, surely you want to ensure it looks attractive to attract the gamers. so if we have creators, then i find it highly likely that they do influence the world they created.
and being in simulation also explains weird rules like light's speed limit. distributed computing is hard. changes propagate slowly in the system otherwise it would be too expensive to simulate.
anyway, my point here is that if this is a game, then my own death is no big deal. it's just a game.
note that this is about making peace with my own mortality. as for others: we should try our best to prevent unwanted deaths and unwanted suffering so that the game remains fun for others. i have some additional thoughts on this in @/simulation.
# meaning
so why live at all? what's the meaning of life?
i think i've managed to find a specific meaning for myself. i have a "backlog" of tasks i want to finish (see the "stashed tasks" in @/task). most of it is just blog post ideas. i won't rest easy until i finish that backlog. if that is not empty on my deathbed then i will be annoyed.
i started writing my backlog of ideas into a file about 4 years ago. i started tracking the backlog's size last year. this is how the file's size changed over time:
[non-text content snipped]
it's still going up. but surely it will start going down sometime soon. there's only so many new novel ideas and sentences that can occur to me, right?
i'm hoping that it will become zero when i am around ~50 years old (about ~15 years from now). then i have a couple of years to work on my steam games backlog. then i just go to sleep and never wake up. if there's an option to go for assisted suicide once my body breaks down (e.g. going blind, needing a wheelchair, alzheimer's, etc) then i'd happily take that.
phrased in other words: for me the meaning of my life comes from working towards whatever goals i set for myself. as long as i have goals, i have a will to live.
# dead body
also i would be happy to donate my dead body to a hospital so that new doctors and nurses can practice on me. then my family doesn't need to deal with my funeral. i definitely want to avoid a formal funeral; i hate such events. unfortunately donating a body to a hospital requires a lot of bureaucracy to set up, i haven't done it yet. this task is in my backlog so it's all good, i'm sure i'll get to it some day. i'd be happy to provide even my living body for science as long as they can guarantee to reliably shut down my brain before cutting me up. that's probably even harder to arrange. i don't really get what the big fuss is about dead bodies in our society, especially if the body's owner explicitly gives it away for free use for anything. i'd certainly give that permission because i wouldn't care at that point: i would be dead.
# memento
why write this post? it's my memento mori. i just wanted to remind myself that i will die, i shouldn't worry about it too much.
my eyesight is continuously getting worse. should i worry about this? should i stop my current practice of sitting in front of my computer 16h per day? should i instead go outside now and then? nah, let me have my comforts now. i don't have any other life shrinking bad habits such as smoking, overeating, overstressing. i can probably last until 50 in this form and after that age i can stop caring about living longer. so i don't need to worry too much about deteriorating health. no need to go to regular health checkups either. i prefer a surprise early death to living too long artificially on various medicaments and then worrying too much and being a burden on the healthcare system and the people around me. humanity didn't have preventative healthcare 200 years ago so it's not that much worse than what most people experienced in the past. again, this is my own preference, not saying others should have the same preference.
and when i finish my backlog, maybe i can waste another 10-20 years gaming if i'm truly lucky. for me that sounds like a comfortable way to await the death. i'm sure it will all go according to this plan and i will be totally happy on my deathbed! this post will age well!
published on 2024-12-09, last modified on 2024-12-15
# imview: use the imperative mood in code reviews
in @/codereview i explored how to phrase review comments. i recommended the form of "describe problem. suggest solution with a question." about a year ago i switched to the imperative form of "request. reason. qualifiers." and i love it.
before i explain why, here's the internet explaining why imperative mood is not the best in reviews:
i agree with most of the advice from these articles. be nice, make the comments always about the code, try to avoid "you", perhaps avoid even the royal we. the only difference is that i make my first sentence of each comment an imperative request akin to a title like in git commits.
another quick note: by "code review" i mean the type of review where the review happens before the change or pull request is merged into the mainline. all review comment threads have to be closed or resolved before such a merge can happen. the author cannot merge the change until the reviewer is happy with it. but it's also fine for the reviewer to pre-approve a pull request and expect the author to make a few additional minor changes to address any open reviewer requests and then merge without the reviewer's further re-approval. this is fine in high-trust environments in exchange for team velocity.
# reviewer's perspective
one of the tips from the links above is to ask questions and that was my previous approach too. however forcing myself to make the first sentence an imperative request makes me think much harder about the comment i am about to make and thus likely improves its quality.
suppose there's a line in the change that i don't understand. if i'm lazy, i can just drop a "why is this needed?" comment and publish my review. job well done, right?
but forcing myself to phrase things in the form of a request would make me try to understand the line harder. and if i still don't understand it, i can make the generic "add an explanatory comment about this line. it isn't clear to me from the context." comment.
an imperative comment presents a step forward. it asks for an action to be taken. the author can still reject it but at least it doesn't feel like the code review is going in circles.
note that the imperative mood applies to the first sentence only. afterwards and in subsequent discussion i'm nice and try to follow the guidance the above websites recommend.
# author's perspective
consider the first sentence as the title of the comment's thread. imperative mood happens to be the shortest form in english. people shouldn't get offended by titles. the feeling of rudeness quickly goes away once this becomes a well established style. it is also strongly diminished if the comment has good description and qualifier parts.
often i don't want to hear the life story of the reviewer. i just want to hear what they want so that i can get the code merged and go home. them asking questions and being nice just comes across as passive aggressive and means more work on my side. so just start out with the request and the life story can come afterwards. it's similar to the common writing guidance which suggests to start with the conclusion.
example from a different workplace: i'm pretty sure nurses won't get offended when during an operation the surgeon just barks "status?" instead of "could you please tell me the heart rate? it will help me decide whether i can begin the operation", or just "scalpel!" instead of "could you please hand me the scalpel? i would like to make an incision".
there are specific formal settings where it should be okay to omit pleasantries. for surgeons it is the operating table, for programmers it could be the code review thread titles (the first sentence of the code review threads). and people can quickly get used to it.
# annoying questions
take a look at the examples of a "nice review" from https://archive.is/LL0h4 ("exactly what to say in code reviews" from "high growth engineer"). let me quote just the first 3 examples, the rest are the same style:
i find such feedback annoying. it is very easy to make, takes 5 seconds to come up with, but might take the author hours to answer. these questions stop progress.
the same feedback in imperative style:
these comments are much harder to make by the reviewer. the reviewer actually has to evaluate the options and make a recommendation based on their research. then the author can either accept or reject the recommendation but doesn't need to go into full research mode for an off-hand comment.
forcing the reviewer to think hard is why the imperative style makes such comments much higher quality even if they can come off a bit rude-ish.
# good questions
questions that don't dump more work on the author are fine though. those are the ones where you try to confirm your understanding of the change.
a sole "why?" is a bad question because the author will need to type a lot and doesn't even know which part the reviewer doesn't understand. "is this needed because x?" is a simple yes/no question. here the reviewer demonstrates some understanding and the author can give a single word confirmation or give a very specific response to the misunderstanding.
these types of questions also require that the reviewer invests some time to understand the code and thus the question doesn't feel cheap.
but don't go overboard. one might be tempted to request changes in the form of a question when the reviewer is truly unsure about the request themselves. "should we add caching here?".
no. my rule says that such a thing must be added as an imperative request: "add caching here." that sounds weird to write when unsure, right? the imperative mood forces me to think hard, perhaps do some research to understand whether it might make sense at all. and if i'm still unsure then i add an "i'm unsure about this though" qualifier at the end to mark the unsureness: "add caching here. i think 99% of the people just look at the frontpage. but i'm not sure about this, thoughts?".
# concerns
suppose the reviewer sees code that might be incorrect but is not sure what the correct code should look like. there are creative ways to raise such concerns imperatively. e.g. "add a unittest for this piece of code. x returns y which sounds wrong." or "document this section of the code. it's a bit unclear how this works."
what if the reviewer is not sure what to suggest? the reviewer should always try to come up with an approach that addresses their concern even if what they come up with is not the best. they should request that with the qualifier that it might not be the best approach: "add caching here. 99% of people look at the frontpage, that should be a cheap request. not sure caching is the best approach though. thoughts?". the reviewer can suggest multiple options if they can come up with them: "add caching here to keep the frontpage requests cheap. or add a todo comment to handle this later one way or another. nevermind if you believe the load won't be a problem".
if the reviewer truly can't come up with a solution then they can omit the imperative request part and start with the concern but then explicitly acknowledge the missing request: "this makes the pageload time more expensive. i thought a bit about this but i don't see an easy way to address this. any ideas or a reason why we shouldn't be concerned about this?".
or if the reviewer is not sure whether the concern applies at all then just don't voice the concern. the review will have less noise. don't block people unnecessarily.
even if the reviewer wants to reject the code change, they should explicitly explain their concern but still provide a way forward for the author: "could you write a short one page document about this feature first? i have several concerns that i believe would be easier to hash out in a document". here i'm using the nicer "could you?" form of request because this request is not aimed at the code but at the person.
# optionality
add justification for the request where it's not obvious. it makes it easier for the author to judge how important the request is. it will make rejecting the request easier. the author can explain why the reason or concern doesn't apply.
lean on making the requests optional especially for stuff that's easy to change later such as implementation details. if a change makes the codebase better, even if it's not the highest quality, it should be accepted. err on the side of team velocity rather than perfectionism. there are cases where perfectionism makes sense such as in interfaces or in widely used libraries but the majority of codebases aren't that.
learn to distinguish between one-way and two-way doors. jeff bezos seems to have popularized this metaphor. from a random article on the topic:
Some decisions are consequential and irreversible or nearly irreversible -- one-way doors -- and these decisions must be made methodically, carefully, slowly, with great deliberation and consultation. If you walk through and don't like what you see on the other side, you can't get back to where you were before. We can call these Type 1 decisions.
But most decisions aren't like that -- they are changeable, reversible -- they're two-way doors. If you've made a sub-optimal Type 2 decision, you don't have to live with the consequences for that long. You can reopen the door and go back through. Type 2 decisions can and should be made quickly by high judgment individuals or small groups.
As organizations get larger, there seems to be a tendency to use the heavyweight Type 1 decision-making process on most decisions, including many Type 2 decisions. The end result of this is slowness, unthoughtful risk aversion, failure to experiment sufficiently, and consequently diminished invention. We'll have to figure out how to fight that tendency.
most things in code are two way doors. even if you are absolutely sure about something, make the request optional. let people make mistakes. people learn more from the mistakes.
this assumes the person will be around to fix their mistake, e.g. teammates. being more strict on external, one-off contributions makes sense though.
this applies even to stuff like style guide violations where the rules are very clear: it might be fine to let a few of them pass if the person is very opposed to some rules. maybe they are right about the particular rule so let them experiment. giving people freedom improves morale, they will be more productive over the long term in exchange.
also if the review tool allows pre-approving a change then do that even if there are many open nits. of course that doesn't apply if there are concerns about the change or another round of review is warranted or based on prior experience the author doesn't respect the suggestions (e.g. ignores them without any response).
# qualifiers
mark the request with your expectations. this is super important for optional requests. giving a reason already implies a sort of conditionality but it's better to make it explicit.
for more complex requests i often put a "thoughts?" note at the end to signal that i'm open for discussion about the request. or i add "nevermind if that's not the case" to signal that my assumption might be wrong. i also use "fine either way though" to mark that i don't really care whether the request is applied or not. and many similar variants, all at the end.
there are other conventions too which put such qualifiers to the beginning:
i haven't used them yet but i think those are fine too.
# other title contexts
there are other places where the imperative mood is a good fit. one example is the first line of the git commit messages. this can be also seen as the title for the commits.
but this works great for bug and issue titles too! nowadays i would file "frobnicator: fix crash when x" instead of "frobnicator crashes when x". it was a bit awkward for some titles but i got better with experience and now my issues are much clearer just from looking at the title. the "projectname:" prefix style is also super useful for grouping issues solely based on the title (also see @/titles).
i try using the imperative mood even for my blog post subtitles. it keeps things short and to the point.
# feedback in general
these are just guidelines in general. better form might apply in some cases. e.g. simply quoting a rule in a code-style or readability review could be enough: "all top-level, exported names should have doc comments (https://go.dev/wiki/CodeReviewComments#doc-comments)". the imperative sentence could be omitted there.
some people might be overly sensitive and strongly prefer pleasantries (the opposite of https://www.lesswrong.com/tag/crockers-rules apply to them). well, just use whatever style they need to keep the review exchange efficient. this is not the hill to die on.
(sidenote: if your personality is still flexible then i highly recommend committing to https://www.lesswrong.com/tag/crockers-rules. life is so much easier when you don't stress about the exact words other people communicate with.)
these ideas go further than code review. all feedback should be imperative. the "just asking questions" does make sense in exploratory or socratic discussions but not in feedback.
but in non-formal environments such as online discussions or just normal everyday discussions more tact is needed. "could you pass me the salt?" works well for simple requests. or "i think asking more questions in meetings would demonstrate more leadership" could be another way to phrase a feedback in a semi-imperative way. both forms include a specific action that's requested so it ensures that the requester gave it a thought and isn't "just asking questions".
(sidenote: i generally try to avoid using the word "please" in my communication. the "could you" is already kind enough, there's not much point making my sentences even longer. in fact adding it makes the sentence feel more passive aggressive to me.)
published on 2024-12-16