Sunday, December 14, 2025

Will you invite Mark Zuckerberg to your wedding?

TLDR; this is just a critique of the default, non-private design of WhatsApp, a famous messaging app. Many apps adopt the same design principle. We should be critical of such designs and of the people behind them.

You own a SIM belonging to a particular network. The SIM is capable of storing a few mobile numbers. These days, however, people have their contacts synced to Google. Having Google contacts makes it easy to restore them onto a new phone, which is convenient for many consumers.

Most people in India prefer using a single mobile SIM for all purposes. Culturally, this dates back to the days of landline telephones. Each house would probably have only one phone line and a single phone number. It was enough; it served its purpose. For our various domestic needs at the time, we'd often call an electrician, a plumber, and so on, and we stored their numbers in a diary. The wired telephones of that era had no capability to store phone numbers.

Cut to the modern day: traditional SIMs have an upper limit on how many numbers they can store, and those limits are far exceeded by what Google Contacts can hold.

WhatsApp is a modern messaging app that Meta (formerly Facebook) acquired a long while back. Somewhere along the way, it introduced the Status feature.

A WhatsApp Status is a feature that lets you share updates with your contacts in a certain format. The section below outlines it:

πŸ“Œ Key Details of WhatsApp Status

  • Temporary posts: A status update lasts for 24 hours before disappearing automatically.
  • Content types: You can post text, photos, videos, GIFs, or links.
  • Privacy controls: You decide who can see your status (all contacts, selected contacts, or exclude specific ones).
  • End-to-end encryption: Like chats, statuses are protected so only the people you choose can view them.
  • Replies: Friends can reply directly to your status, which starts a private chat.

🎯 Purpose

  • Share moments (like a trip photo or a thought of the day).
  • Broadcast announcements (e.g., “Closed today” for a business).
  • Express moods or creativity with text, emojis, and stickers.

⚡ Example

  • You post: “Enjoying the sunset πŸŒ…” with a photo.
  • Your contacts see it in the Status tab.
  • After 24 hours, it disappears automatically.

So, a WhatsApp Status is essentially your short-lived story or update that your contacts can view for a day.

It is a given that WhatsApp has privacy controls which let you choose whom to share an update with. But people do not realize that, out of the box, your status updates go to everyone in your contacts who also uses WhatsApp.

Why would you want to share your "personal" status update with your plumber, electrician, taxi driver, etc.? These are people not directly connected with most aspects of your life. Would you invite them to your wedding, or to your children's weddings? Perhaps a few people might invite them to their marriages or other important life events, but statistically speaking they are not the majority in urban India. (In Indian villages, the story is a little different.)

What do we do about it? Are we not meant to store the contacts of local service providers in our phone contacts? Are they meant to be reached only via internet search? Are we supposed to keep a separate mobile number and phone just for storing the numbers of local service providers?

What does it say about the company, or the person, that designed such an app? Do they care about your privacy? Would you invite such a person to your family wedding? The creators and owners of such apps are considered celebrities, and there is a certain social allure to inviting a celebrity to a personal life event. It might lend you a certain social flair to mention that a celebrity is attending your marriage. To me, personally, that idea is insane. If you are inclined to behave that way, you may well be labelled a status-seeking, selfish social animal with no concern for protecting the society you live in.

On a more positive vein of thought, one could perhaps argue that "that design" is actually meant to make you "more" privacy conscious, and that this is what the ever-changing modern times require. I don't agree. Perhaps life will go that route... one day our future generations may be taught in school itself to be privacy conscious. But the privacy issues of the modern day were very different a few centuries back. Case in point: you should probably read the Thomas Hardy classic "The Mayor of Casterbridge". We were not bothered by privacy in those days. Why should we be now?

Saturday, September 13, 2025

[Rant] A gmail google takeout frustration

 ⚠️ The intention here is not to point fingers at products or an organization. 

TLDR; I tried to "cleanse" my Gmail inbox, and gave up. But I discovered some pleasant Python support along the way. Respect for that. πŸ™

If you are someone from my generation, statistically speaking there must be around 50k unread emails in your inbox. I do not want to explain why that is so; it is a fun exercise to work out how inboxes reach such a dire state. For this blog post, it is enough to sum it up by saying that most of us are victims of capitalism and the communication revolution.

I set out on the journey of reducing my Google One storage footprint. Somewhere in Google One's web pages you can see product-wise usage of storage (across Photos, Gmail, Drive, etc.). Gmail wasn't the culprit here: Gmail held a sizable chunk of data, but Photos was taking up 7x more.

Therefore I figured out what I could cull from my Gmail inbox. There is a lot I need to keep, like bank statements, receipts, and personal communications with people in my friends circle. And there is a lot I can discard: all of those Facebook notification emails, Quora email digests, etc.

At this stage, there is no point wondering why the inbox is flooded with these. At one point I was active on social media, and therefore I never considered those emails junk. But now they are! Oh, how the times have changed!!

I assumed there would be a way to estimate what I was about to delete, and then actually delete it. In other words, I wanted to delete only what I deemed unnecessary, and nothing by accident.

My plan was to do a kind of data analysis on the export; in simpler words, to find out who is spamming me, gather the data, and then write a Google Apps Script to delete the emails. But I was in for a shock. Two important points:

1. Many mail items had a received time, but the times were not timezone-normalized. In a few cases, even the formats varied. There is no reason anyone should expect data that way.

2. You cannot trace a mail item back to a thread. Gmail's servers organize emails as conversation threads, and each thread has a unique identifier; I know this from playing around with Gmail inboxes via Google Apps Script. There appears to be an identifier in the export, but it has no correlation with what is on Gmail's servers.

These two issues made it impossible for me to decide what to delete before deleting it.

I used Jupyter notebooks and pandas to help me out with this exercise. I was pleasantly surprised that Python has built-in support for working with mbox files. My choice of a language like Python is personal: the language and its ecosystem are quite mature for data analysis. I am sure other language ecosystems exist. Having said all this, I am not trying to evangelize one product or another.
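Incidentally, here is the kind of first-pass analysis I mean, as a minimal Python sketch using the standard library's mailbox module. It tallies senders in a Takeout mbox export; the file name below is an assumption (the name Google gave mine may differ from yours):

# Tally the most frequent senders in a Google Takeout mbox export.
# The file name is an assumption; use whatever name Takeout gave you.
import mailbox
from collections import Counter

box = mailbox.mbox("All mail Including Spam and Trash.mbox")
senders = Counter(msg.get("From", "(unknown)") for msg in box)

for sender, count in senders.most_common(20):
    print(f"{count:6d}  {sender}")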

It has been more than ten years since Google Takeout was released, so it is natural for a bloke like me to be shocked: why is there no support for the way you want to use it? Probably because it takes effort to design an archival format that stays consistent with your email server, and my use case happens to be niche. People think more about backing up their data than about finding a convoluted way to "cleanse" it.

However, I am hardly discouraging others from walking this path. In fact, I encourage it. Who knows, a fresh set of eyes might discover something I could not. If it comes to that, just be happy to let people know.

Friday, June 13, 2025

My 2 paisa (cents) on digipin

DIGIPIN is a geo-coded addressing framework. To appreciate the concept and its acceptance, however, you need to contrast it with India Post's PIN code system. I am purposely omitting all of that in this post.

I feel this addressing system helps a lot of "machine readable" systems. That is a very broad term, too esoteric for the common man. Ultimately, I think "machine readability" is what enables the common man to shop online, order food, get turn-by-turn directions while navigating, etc. It is the whole reason you are able to read this blog post today. However, let's not go there either.

Again, the next few lines are perhaps what a common man would not appreciate all that much... the crux of this addressing framework is a scheme for encoding and decoding GPS coordinates, mainly latitude and longitude. Somebody, or a handful of people, thought of it and fought to make it relevant. Kudos for that.
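To make that concrete, here is a toy Python sketch of the general idea behind grid-based geocodes. To be clear, this is not DIGIPIN's actual algorithm, alphabet, or bounding box (the India Post documentation has the real spec); it only shows how a latitude/longitude pair can become a short code through repeated 4x4 subdivision:

# Toy grid geocoder, NOT the real DIGIPIN spec. The alphabet and the
# bounding box (roughly covering India) are made-up illustrative values.
SYMBOLS = "0123456789ABCDEF"

def encode(lat, lon, lat_min=8.0, lat_max=38.0, lon_min=68.0, lon_max=98.0, levels=10):
    code = []
    for _ in range(levels):
        lat_step = (lat_max - lat_min) / 4
        lon_step = (lon_max - lon_min) / 4
        row = min(int((lat - lat_min) / lat_step), 3)  # which of 4 rows
        col = min(int((lon - lon_min) / lon_step), 3)  # which of 4 columns
        code.append(SYMBOLS[row * 4 + col])
        # zoom into the chosen cell and repeat
        lat_min, lat_max = lat_min + row * lat_step, lat_min + (row + 1) * lat_step
        lon_min, lon_max = lon_min + col * lon_step, lon_min + (col + 1) * lon_step
    return "".join(code)

print(encode(12.9716, 77.5946))  # Bengaluru, roughly

Each added symbol shrinks the cell 4x in each direction, so ten symbols pin a location down to a few metres.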

Every Indian would want to tout this as a novel idea; it is only relatively novel. There is already a manifestation of an addressing framework in Google Maps (good luck finding what it is, much less using it). There are many others, like What3Words (car enthusiasts might know it). You can find more if you look up geocoding on Wikipedia.

A common man can find more information on DIGIPIN (if he or she is ⚠️ determined). I found a GitHub repo which gives neat technical documentation. There is also DHRUVA (it is an acronym), a document that details the idea behind DIGIPIN. All of these can be found on the website of India Post after a little internet searching.

Final Note

All this is good innovation. But don't forget we are still a savage species. Some individuals of this species can give birth to increase its relative population; the mechanism concerned has been the same for millions of years, and it is going to be the same for millions more. Of course, today there are technologies like IVF, C-sections, etc. (But that is beside the point.)

A subset of that population gets to witness these kinds of innovations over their lifetimes. In one sense, that is a blessing. But don't forget what "these individuals" did over the years to get there. They have lied, rescued, murdered, fought, escaped, eaten food, polluted, cheated, innovated, raped, invented, travelled, fake-newsed, voted, blogged, judged, punished, etc. They've done a myriad of things, both good and bad.

Realize that humanity even today still continues to do these same things.

And oh yes, there are DIGIPINs for places in the Indian Ocean and the Arabian Sea too. 🌊

Wednesday, March 20, 2024

Dammit root-check (a yeoman bug saga)

TLDR; I hit what I thought was a bug in Yeoman, and believed it was a bug for a long time, until I discovered a module called root-check. The bug wasn't actually a bug!

This is a rant. My previous blog post adds a little context here, but it is not absolutely necessary to read that one first before continuing. This is a post about a bug I recently faced with Yeoman.

For those in the dark, Yeoman is a program to scaffold projects, mostly Node.js ones. But you are not limited to Node.js projects; you can scaffold any kind of project, and you can make it do some simple tasks too. So when I figured out how to make cloud-init vmdks, I thought the process needed a bit of automation, and went ahead and made a Yeoman generator for it.

So this involves writing a custom generator. 

A generator, in Yeoman parlance, encapsulates a folder structure, files (which can be templated), and scaffolding logic. A generator is usually run in your working folder for a specific purpose (e.g., to scaffold a React, Angular, or LoopBack 4 application), and it creates the necessary folder structure, assets, etc. (including license texts) to get you quickly started with the respective development process.

The Yeoman CLI (command-line interface) is a Node.js program that helps humans download generators from the internet (i.e., generators that meet their project needs), discover generators already installed on their machine, and execute them. There are other parts in the picture; together they form a kind of ecosystem for scaffolding projects, in the same spirit as tools like Maven or NuGet.

I will narrate my recent Yeoman experience. The goal is not to ridicule the project maintainers. If you asked someone what open-source software development looks like... this blog post might give you a perspective. Further, I am not requiring the reader to be familiar with the tool; however, I make no attempt at explaining specifics, since I believe they are self-explanatory.

The first hiccup involved the CLI and the generator-generator. Today, from npm, you download the following:

yo@5.0.0
generator-generator@5.1.0

And when you type yo generator (to scaffold a custom generator project), the CLI errors out! But once you google enough, you will find that the simple fix is to downgrade yo, i.e., install yo@4.3.1 (npm install -g yo@4.3.1). With this I was able to progress with authoring my generator. But please note this as issue #1.

I knew what the generator should do. But when it came specifically to doing a Linux filesystem mount, things started to break, and I didn't know why! I made sure I ran it in a rooted terminal and all. I wrote some isolated tests and confirmed there was actually no fault in the code I wrote. Yet why it failed when invoked through the Yeoman CLI escaped me. Make note of this as issue #2.

The next thing I did was raise an issue on GitHub. That issue post contains examples of what I was trying to accomplish, and an isolated example proving that the file mount worked as expected when run as root. (You will also find a gist in that post.)

There was an itch to "tell the world" first; I went around forums asking people if they would react to the GitHub issue. It is unethical to do this, but people do it anyway. My aim, however, was to get other people to somehow confirm that they could reproduce the bug, and then perhaps ask them nicely to react to the issue!

Those attempts didn't work anyway. So there was no choice but to read the source code. I wondered: could this be a bug in Node.js itself?! On Linux?! Could I pride myself on discovering a Node.js bug?!! All the source code research did was help me write a better isolated test script, one that modeled what happened inside the Yeoman CLI. To my surprise, even that test performed the file mount, whereas when Yeoman ran my generator, the Linux file mount failed! I was flabbergasted. Here is the link to that isolated example: https://gist.github.com/deostroll/b69f6868c99f97bccb14bf1b848c7bbf#file-index-js

For a long time, I thought the problem could be issue #1: was I working with outdated components?! So I set out to find the updated components I should use instead, but I couldn't find any in the standard official repos that npm pointed to. This made me wonder about the OSS experience. I was now at the mercy of the maintainers, or on my own to fix the problem, because as of that moment my issue on GitHub was merely bytes of data stored in GitHub's database, residing in a datacenter somewhere in the world. Would someone ever respond as to why the bug happened?

Lingering around, trying to find out which updated component versions I could work with, I discovered a few other facts. Many OSS projects in JavaScript, and web development in general, are in some kind of movement to embrace new standards, like async/await, decorators, etc. Some of these standards are not yet formalized into the language itself. Decorators, for example, are not yet part of the ECMAScript standard; they are still experimental, yet thanks to TypeScript, developers can already enjoy using them in their codebases. So this is what is happening in our software landscape today: a kind of migration of code patterns.

Most OSS projects have nothing new to bring to the table for developers, but they do this migration anyway, for several reasons. Some of them do it well; that is, their end developers are not affected, and everything works as before. For others, not so much. I seem to be stuck in this branch of life. Yeoman is migrating. They are even namespacing their projects over at npm, in an effort to reinvent the wheel. This leaves developers like me in the shadows on how to fix things. But make a mental note of my actual position: I am not someone deeply involved with this project. I have not made any contributions, nor do I do code reviews or respond to other issues on their GitHub issues page. I pick up the so-called software after 10 years, find an issue, and post it on their issues page in full hope that someone will quickly respond. And then I learn about this great migration, and realize my bug may never get the response I am hoping for.

Ultimately, what gave me the clue was the second version of the isolated test. If I plugged my generator into my Yeoman environment properly and ran it in an elevated (rooted) terminal, the file mount succeeded. But the same thing via the Yeoman CLI still failed! At that moment there was still no answer.

And then one of the maintainers responded to my issue post. His response was that I was working with outdated components; they were more than 5 years old! I don't know why the maintainer side-stepped the actual issue. That is when I actually "read" what I had posted. Compare the first isolated test and the second: the second one explains itself better. Perhaps there was a better probability that a maintainer would understand the underlying issue IF I had posted that one. I wondered what I was smoking when I wrote the issue post the way I did. πŸ€”

So what ultimately helped me figure it all out? I happened to capture the error code from the (Linux) mount program and googled it. It turns out this error only occurs when mount runs as a non-root user. But I was not running as a non-root user! Now, does Yeoman have a thing about running as root...? IT DOES. The CLI uses a module called root-check: if it is invoked anywhere in the code and the terminal is rooted, it downgrades the process to a non-root one. In my case, there was no indication of this other than the failing mount command!
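To illustrate the mechanism, here is a minimal sketch in Python (not root-check's actual JavaScript): a process started as root can permanently drop to an unprivileged uid, after which privileged operations like mount fail exactly as if the terminal had never been rooted.

# Sketch of privilege dropping; uid/gid 65534 ("nobody") is illustrative.
import os

if os.geteuid() == 0:
    os.setgid(65534)  # drop the group first, while we still can
    os.setuid(65534)  # irreversible: the process is no longer root

print("effective uid:", os.geteuid())
# Any privileged operation attempted from here on, e.g. mounting a
# filesystem, fails with EPERM despite the rooted terminal session.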

The damn bug was actually a feature! 🀦‍♂️

A few minutes before finding the answer to this problem, I came across an issue post on the root-check module's repository. It is titled rather comically; the OP expresses his astonishment and angst, and suggests that the module should have an environment variable to toggle the root-check behavior. The maintainer provided an apt reply. But after all this experience, reading that post, I could understand the OP's sentiment, and I wanted exactly what he wanted.

Thursday, February 22, 2024

So I think I figured cloud-init!

Previously I wrote about cloud-init as something a cloud service provider might offer (or perhaps I simply called it user-data; I don't accurately recall). Recently I learned that it is something provided by the operating system image itself (especially for Linux-based servers). I do not know the history of the OS feature as such, but if you are someone who plays around with Oracle VirtualBox for provisioning VMs, then you need to know how to benefit from cloud-init.

I had to go through several webpages and perform several experiments to get it right. So in this post I will collate the steps so you can get started easily.

  1. TLDR: Grab the vmdk file. Create a blank VM (no disk) and attach the disk you have downloaded.

    You need to grab a cloud image. Every distro has one. It's a vmdk file, i.e., a disk file that VirtualBox can attach (not the traditional ISO). If you create a blank VM, attach this disk, and power up the VM, the OS will boot and display the login prompt. But since you don't know the credentials, you cannot work with the VM instance any further.

  2. Compose the cloud-init disk. Multiple steps are required here; see the dedicated section below.

  3. Attach the cloud-init disk to the above, and boot.
After the VM boots, you can log in with the user profile you specified in step #2. The VM will have all the necessary artifacts you specified in that step.

The benefit is that you now have a VM configured with all the software you need to further your exploration.

Goodbye secure practicesπŸ‘‹πŸ«‘


Most cloud-init tutorials will talk about creating different users, public-key-based SSH authentication, configuring sudo to not prompt for a password, etc. I am assuming you are a novice to the whole concept of cloud-init, and that you are working in some kind of personal, self-exploratory capacity.

The bottom line is that those secure practices are meant for professionals and experts. I assume most readers of this post are trying to become one, and that they understand they should not repeat these insecure shortcuts in their professional work.

Configuring the cloud-init disk


Most of the information here is obtained from this thread: https://superuser.com/questions/827977/use-cloud-init-with-virtualbox/853957#853957

I will reproduce the commands in a slightly different way below. Now is probably a good time to check out some cloud-init docs and tutorial videos; they should give you a precursor to the stuff I write in, e.g., the user-data and meta-data files below. The tutorials you find online are vastly different from what you are going to go through here.

0. What am I doing here?


I am creating an instance with Node.js pre-installed, starting from an Ubuntu cloud image. So when you log in, you should, in theory, be able to work with node out of the box. You log in to the OS with username/password ubuntu/ubuntu. The hostname of the provisioned machine is osbox03. All of this is done by cloud-init: the process downloads Node.js and makes its binaries globally available. For cloud-init to work this way, we need to create a disk with a certain label and copy over the files holding the necessary cloud-init configuration. This is outlined in the steps below. At the end you will also find a link to a gist with all the data and commands you need to type.

1. create a user-data file:


#cloud-config
users:
  - default

ssh_pwauth: true
chpasswd: { expire: false }
preserve_hostname: false
hostname: osbox03
runcmd:
  - [ ls, -l, / ]
  - [ sh, -xc, "echo $(date) ': hello world!'" ]
  - [ sh, -c, echo "=========hello world=========" ]
  - [ mkdir, "/home/ubuntu/nodejs" ]
  - [ wget, https://nodejs.org/dist/v20.11.1/node-v20.11.1-linux-x64.tar.xz, -O, /home/ubuntu/nodejs/node-v20.11.1-linux-x64.tar.xz ]
  - [ tar, xvf, /home/ubuntu/nodejs/node-v20.11.1-linux-x64.tar.xz, -C, /home/ubuntu/nodejs/ ]
  - [ ln, -s, /home/ubuntu/nodejs/node-v20.11.1-linux-x64/bin/node, /bin/node ]
  - [ ln, -s, /home/ubuntu/nodejs/node-v20.11.1-linux-x64/bin/npx, /bin/npx ]
  - [ ln, -s, /home/ubuntu/nodejs/node-v20.11.1-linux-x64/bin/npm, /bin/npm ]
  - [ rm, /home/ubuntu/nodejs/node-v20.11.1-linux-x64.tar.xz ]

system_info:
  default_user:
    name: ubuntu
    plain_text_passwd: 'ubuntu'
    shell: /bin/bash
    lock_passwd: false
    gecos: ubuntu user

2. Create meta-data file:


instance-id: my-instance-1

3. Create the cloud-init disk:

Follow these steps:
# create an empty 2 MB (sparse) disk image file
dd if=/dev/zero of=config.img bs=1 count=0 seek=2M

# put a FAT filesystem on it, with the volume label cloud-init expects (cidata)
mkfs.vfat -n cidata config.img

# mount it somewhere so you can put the config data on
sudo mount config.img /mnt

Copy the user-data and meta-data files to /mnt, and, then unmount:
sudo cp user-data meta-data /mnt
sudo umount /mnt
config.img is now hydrated with the cloud-init setup. We need to convert the file from the img format to vmdk so that VirtualBox can attach it:

# qemu-img performs the conversion (on Ubuntu it ships in the qemu-utils package)
sudo apt-get install qemu-utils
qemu-img convert -O vmdk config.img config.vmdk

Now attach config.vmdk to the VM created in step #1, and power it up.

After you have powered up your VM, you can log in at the console. Quickly inspect the /home/ubuntu/nodejs folder; if its contents don't exist yet, you may have to wait a while for cloud-init to conclude its work. You can run the following command to inspect the cloud-init output:

cat /var/log/cloud-init-output.log

If anything fails, you will learn about it through the above output. And if everything works out, you can type the following to self-confirm that everything works:

node  --version && npm --version

An alternative command you can run to assess the status of cloud-init:

cloud-init status

That's all folks!

Gist: https://gist.github.com/deostroll/bcb18a5d25f533b4aad3f27566219bf9

Saturday, February 12, 2022

Copying files from host to container and vice versa

TLDR; You can use base64 encoding/decoding to copy files

Here is a simple trick to deal with the menace of getting files from the host to a container, or vice versa. It even works with VMs running inside your host. We usually use SSH to connect to VMs or containers; for VMs, SSH itself has solutions (like scp) for copying files. But it can still be done using the technique I am about to explain.

In most popular (Linux-based) Docker images there will be a base64 utility program, or it can be installed via a package manager such as apt or yum. The same utility exists out of the box on several other popular operating systems. And even where it is absent (notably on Windows), other applications provide it, e.g., the Git Bash command line.

The steps are simple. Say you have a file called hello.txt with some text.

1. Run the base64 utility to encode the file.
$ base64 hello.txt
2. Copy the output to your clipboard.
3. In your target container instance, in an exec session, you need to do the following:
$ echo <base64_text> | base64 --decode - > hello.txt

The caveats


1. The container image should be Linux-based and have a base64 utility installed. If it isn't installed, there must be a script-only solution... I will share when I find it.
2. In this example, I described the base64 binary's arguments based on what I experienced. Some binaries may have a different set of arguments; please consult the help docs.
3. You need to be able to write to the container file system.

NB: #3 is the usual bummer. But if you are able to write on the container file system, why not just use exec sessions⁉️

So why does it work?


I do not have a solid answer. You can look it up on YouTube to understand how the algorithm works. All I know is that the encoding maps bytes onto 64 ASCII characters, and all 64 of those characters can be typed on a simple English (US layout) keyboard.

Incidentally, base64 is also very popular for data transmission over the internet, because it is plain ASCII. I learned this the hard way when I explored the SMTP protocol and servers.

So any digital sequence of bytes can be encoded, transferred, and then re-formed using base64. That is the underlying principle of why this works.
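If you want to convince yourself of that, here is a tiny round-trip demo (in Python, purely for illustration; the shell utility behaves the same way):

# Any byte sequence survives a base64 encode/decode round trip intact.
import base64
import os

original = os.urandom(256)                  # arbitrary binary data
text = base64.b64encode(original).decode()  # safe, typeable ASCII
restored = base64.b64decode(text)

assert restored == original
print(text[:60], "...")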

I conclude this post with an interesting exercise. Suppose there is a file with some content inside it, and also a particular set of file permissions. How do you get the file across (to a container or elsewhere) with the same permission set? (A nudge toward one possible approach is sketched below.)
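If you want that nudge: one possible approach, sketched here in Python purely for illustration, is to wrap the file in a tar archive first, since tar records permission bits, and then base64 the archive instead of the bare file.

# Hint sketch: permissions travel inside a tar archive.
import base64
import io
import tarfile

buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    tar.add("hello.txt")  # mode bits are recorded in the archive

text = base64.b64encode(buf.getvalue()).decode()
# On the other side: decode the text and run it through tar, e.g.
#   echo <text> | base64 --decode | tar xzf -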

Saturday, February 5, 2022

Cloud-init or userdata

TLDR; I explored cloud-init/userdata. It is a time-saving feature, mostly used when creating VMs in the cloud. (It is perhaps available for other distros as well.)

To start off, I would have to say that cloud-init is mostly a concept/feature associated with Ubuntu server distros. I have installed these distros a lot and have seen plenty of console output containing this term. But I never really understood, much less appreciated, its significance until I explored cloud services such as AWS, DigitalOcean, etc. On cloud services, this feature is commonly referred to as "userdata". (Not all cloud services use that term, or necessarily have the feature.)

As the name suggests it is meant to initialize your VM instance to the required state for you to go about your business - exploring or running software including:

1. Updating software
2. Installing favorite tools, e.g., ifconfig on Linux
3. Creating user accounts, setting up the hostname, etc...

That is all you need to know. I will give you a quick account of how I came to experience this feature.

So I was exploring the SSDP protocol. In short, it is a way for programs to find other programs on the network. There are many use cases for this kind of capability; most commonly it forms an essential building block when similar programs must decide among themselves who should be the leader and who should follow.

There is a Python package which implements this protocol: ssdpy. You have to install it manually. On Ubuntu Server (v20.04), Python and pip don't come ready out of the box; you have to install them manually after running your apt updates/upgrades.

A developer exploring cloud services will routinely run into surprises like the ones mentioned above. But once all that is done, you can explore the ssdpy program. It has CLI commands to start an SSDP server which can publish some service (usually on the same host machine), and a discovery program that can find it. You run the discovery program from a different VM on the same network.
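From memory, the Python API looks roughly like the sketch below; treat the class and method names as assumptions and verify them against the ssdpy README before relying on this.

# Assumed ssdpy usage; double-check the names against the package docs.
from ssdpy import SSDPClient

# On VM 1 you would run the server side, something like:
#   from ssdpy import SSDPServer
#   SSDPServer("my-test-service", device_type="test").serve_forever()

# On VM 2, search the local network for advertised services:
client = SSDPClient()
for device in client.m_search("ssdp:all"):
    print(device.get("usn"), device.get("location"))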

However, SSDP doesn't work in the cloud. To test this, I spun up two VMs, came to the above conclusion, and then quickly destroyed the VMs so I wouldn't incur operational costs. But then I thought about testing it with a different set of options.

So basically, all the commands required to set up ssdpy needed to be run on two VM instances. It seemed apt to use the "userdata" feature here. Along with this userdata initialization, I also downloaded some bash scripts from my GitHub gist, intended to send notifications to my mobile phone when the VM was "ready".

The final verdict on SSDP is still the same: SSDP doesn't work in the cloud. I am not going to answer why that is so... This post was a short intro to "userdata", or cloud-init.