My son has shown enough interest in learning some serious programming beyond his after-school Scratch Jr clubs. I gave it some thought and realized that a good way to get a proper feel of real-world coding (not via the attractive graphical Scratch UI) has been around for decades: LOGO.
It is a drawing system where you give primitive instructions to a moving cursor (a pen), like move forward, move backward, turn left, turn right, pen up, pen down. It brought me back to my middle school days, when a teacher showed us how it worked and we went to the lab to draw all kinds of figures.
Then, after a short bit of research, it turned out that the best environment for this is readily available in Python, which is a very sweet combination.
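Python ships a LOGO-style environment in its standard library, the turtle module, which speaks essentially the same vocabulary. A minimal first session can look something like this:

import turtle

# LOGO-style commands: forward/backward, left/right, penup/pendown
t = turtle.Turtle()

for _ in range(4):          # draw a square
    t.forward(100)
    t.right(90)

t.penup()                   # lift the pen and move away
t.forward(150)
t.pendown()
t.circle(50)                # draw a circle next to the square

turtle.done()               # keep the window open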
After upgrading the Postgres database in one of my application’s test environments, the Django server code stopped working because the permission required to run database migrations was missing.
I didn’t investigate the exact change in the defaults, but as a quick fix, we just need to grant the permission to the owner user on the default “public” schema, like this:
postgres=# \c your_data_base
your_data_base=# GRANT ALL ON SCHEMA public TO your_user_name;
A full list of commands I often use for setting up a test database:
CREATE DATABASE mydb;
CREATE USER myuser WITH ENCRYPTED PASSWORD 'mypass';
GRANT ALL PRIVILEGES ON DATABASE mydb TO myuser;
\c mydb
GRANT ALL ON SCHEMA public TO myuser;
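On the Django side, these placeholder credentials then go into the usual DATABASES setting. A sketch, assuming a default local install for host and port:

# settings.py (placeholder values from the commands above;
# HOST/PORT assume a default local install)
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mydb",
        "USER": "myuser",
        "PASSWORD": "mypass",
        "HOST": "localhost",
        "PORT": "5432",
    }
}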
I wanted a fast-booting virtual machine image for quick tests and running containers, so I tried to trim down a Debian cloud image.
The end result is a VM that boots from the QEMU command line to the login prompt in roughly one second.
I could have gotten below one second by not installing Docker, but I kept it since it is a common tool and demonstrates a featureful environment rather than an absolutely minimalistic one.
The method I used was mainly to inspect the dmesg and systemd-analyze output and find the components that take a long time in the critical chain.
Here is the list of things I changed:
Rebuilt the kernel from the upstream tree, including only the necessary drivers as built-in (=y) and avoiding any expensive probing or detection such as raid6, ata and mouse support.
Commented out optional /etc/fstab entries.
Disabled a number of systemd services after reviewing systemd-analyze critical-chain, including /lib/systemd/system/systemd-binfmt.service.
I used the dhcpcd service in the VM to configure the guest network, which is not the fastest option; using a static config would likely allow faster ssh access.
I have been using virt-customize for years in my q-script. It works well when the environment is familiar and friendly, but it can go wrong for a number of reasons because of its complexity.
I need something simpler, so I’m looking at implementing a subcommand in q-script that does not call the virt-customize command, but instead spawns a VM and then sshes into it to do the customization.
This is quite similar to the -c argument of its qemu subcommand.
But it turns out the initial boot is not easy, which I believe is also the reason why virt-customize has accumulated so much complexity.
Still, I want to take a different approach. Instead of relying on /boot/vmlinuz and other things from the host, q-script will rely on a simple kernel image and initrd.
The kernel can be built with kvm_guest.config, and the initrd can be generated by packing a minimal directory tree into a cpio archive.
So I need to prepare a directory first. Over the last couple of years, I have built a few different flavours already, so I am not afraid to do it again for q-script. The idea is simply to have a /init script with a shebang line and the executable bit set, which can be an ash script. All it has to do is mount /proc, /sys and /dev, then figure out the rootfs layout of the image in question.
I will add a CI pipeline, since q-script is already hosted on GitLab.com, so that the images can be fetched whenever q-script customize is invoked.
When working on a technical problem or product, when should you worry about knowing all the details and when not? I asked myself this question and tried to answer it. Here are some of my thoughts.
It’s not black and white, but it’s safe to say you haven’t learned enough until you know a lot of the details. But just knowing is not mastering: knowledge becomes skill only when it is in your muscles, bones and genes. Based on this assertion, I would say a good approach, as a beginner in anything, is to be as focused as possible, i.e. dive deep. Don’t just talk about shallow concepts and stop at the surface. Put simply, neglecting details means you’d inevitably miss the fundamentals, or the things that really matter in the long run.
We can learn a Chinese dish from a recipe and replicate the taste fairly well by using precise amounts of ingredients and the prescribed way of cooking: how much heat, how much time, and so on. But just doing this mechanically, without knowing the subtle difference between adding a little more or a little less salt, cannot teach you the art of using salt. The same goes for every other ingredient, and studying and experimenting with them carefully one by one, rather than always treating the recipe as a whole, would greatly help your cooking in many other dishes.
Once you are very familiar with something, it’s also good to jump out of the details, abstract them away under a contract or convention, and then focus on the abstraction, because the abstraction and the model are the essence of a problem. Understanding the core principle is much more useful than knowing one answer to one problem. Only ever focusing on details and specifics, while missing the big picture, stops you from learning more. This is why many people find it hard to break through their ability ceiling: they are repeatedly solving specific problems, such as implementing APIs over and over again, without even trying to identify and understand where the next level is.
This can happen easily if all you look at is part of the whole picture: just one leaf on a tree, or just one tree in the whole forest. Don’t forget that even when you have seen the whole forest, there are still mountains and oceans. There is always a bigger picture, so you need to know when to lift yourself out of the problem at hand and stand back to look at what you’ve been doing in a larger context.
Working hard on one problem in great depth isn’t the opposite of having a broad view; they live together very well. In my opinion, though, the better approach is usually to go deep before going broad. It’s easier, more natural, and gives a better return on the effort.
You don’t have to find out all the details whenever you go into a new area or topic, because that way you’ll only ever be able to explore a few specific topics: humankind has accumulated too much over hundreds and thousands of years, and the existing body of information and knowledge is massive. Breadth is only possible when you know how to limit the depth.
So we have to constantly make choices, evaluate and decide when to stop. Should you continue digging? Should you hand the rest to other experts? Should you just re-use what is already there without knowing absolutely everything under the hood?
Everyone knows it’s reasonable to just learn to drive without knowing how internal combustion engines work. The same applies to many things in life, research and careers. Applied technology is all about making things useful and reliable for many people who are not scientists in the underlying field. Someone has to know the details, but far more people just need to know the usage, the benefits and perhaps a rough idea of the internals; for instance, that your engine typically needs spark plugs if it runs on petrol.
It’s not most people’s job to worry about the technology, unless it is part of some product they make.
In other words, ChatGPT is great because it has made this technology available to everyone, even those who cannot properly explain how many different types of artificial neural networks exist (me included). That’s the way human civilization develops: one person’s specialty is meant to benefit everyone else.
Above all, learning is best driven by interest; taking advantage of passion is the first priority. Then, a deep focus on a few things plus a general awareness of many related things is probably the best way to strengthen your skill set.
You must have tried creating a new Rust project with cargo:
cargo new rust-test
and the generated main.rs is as simple as it gets:
fn main() {
    println!("Hello, world!");
}
With cargo run it does the expected thing:
Compiling rust-test v0.1.0 (/home/fam/test/rust-test)
Finished dev [unoptimized + debuginfo] target(s) in 0.37s
Running `target/debug/rust-test`
Hello, world!
However, Rust is quite peculiar in how its standard library, particularly the println! macro above, deals with stdout.
How so? If your stdio is unexpectedly set to non-blocking mode, the above can panic!
To demonstrate it, let’s move the print into a loop so the program runs longer while we do something with the terminal. Change the main function to:
fn main() {
    loop { println!("Hello, world!"); }
}
This creates an endless loop that prints “Hello, world!” lines in your terminal forever. Let’s run it; once we see the output, press Ctrl-Z to stop it. Then run the bg command to let it continue in the background. It will keep printing quickly, until we type:
tmux<Enter>
Note that you cannot see the echo as you type, because the screen is being flooded by the hello world output. That doesn’t matter; just press the five keys blindly.
Did you notice that the flooding has stopped? If you quit tmux (Ctrl-D), you can see this line on your screen:
[1]+ Exit 101 cargo run
Our hello-world has exited. Something went wrong.
The reason is that a non-blocking tty can report that it is busy while the hello world program is writing to it. Normally this doesn’t happen, but both tmux and hello world share the same terminal, and since tmux sets its standard input to non-blocking mode at startup, that affects the standard output file descriptor of our hello world program too.
The returned error (errno EAGAIN) indicates that the writer should retry later (ideally after polling the fd until it reports writable). Unfortunately, the Rust stdlib (print_to) doesn’t handle this error, because non-blocking mode is considered “not supported”.
There is a GitHub issue discussing this:
https://github.com/rust-lang/rust/issues/100673
TL;DR: you’re on your own if you have non-blocking stdio.
The conclusion: if you write Rust and use println!, or if a library you pull in (directly or indirectly) uses println!, good luck! You probably have no control over the non-blocking flag of your stdio, and to make it worse, it can change at any time, as shown in the example above; it is entirely up to the user or sysadmin. If that happens, your program is prone to crashing, and there is very little trace left to analyse the failure, because even the panic! message, which is supposed to go to stderr, may not be printed for the same reason.
This is quite frustrating, because I always thought it was somewhat easier to write robust programs in Rust, though this issue and the unhelpful replies from the Rust developers suggest otherwise.
To be fair, other languages can suffer from the same problem, to different degrees.
For example, in C++, std::cout will drop the unwritten part upon EAGAIN and set the fail() flag; in C, printf returns the number of bytes written, so there is room for explicit error handling.
Python raises an exception called BlockingIOError, which is easy to handle in a try/except block. Go automatically retries, so it “magically” just works, though that can potentially turn into an unnecessary busy loop if the tty stays busy for long.
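To illustrate what such handling looks like, here is a rough Python sketch (not from the original experiment): it puts its own stdout into non-blocking mode, then retries the write once the fd becomes writable again.

import os
import select
import sys

fd = sys.stdout.fileno()
os.set_blocking(fd, False)         # simulate what tmux did to the shared tty

data = b"Hello, world!\n" * 1000
while data:
    try:
        n = os.write(fd, data)     # may write only part of the buffer
        data = data[n:]
    except BlockingIOError:
        # EAGAIN: the tty is busy; wait until it is writable, then retry
        select.select([], [fd], [])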
In Rust, the remedy is to always use writeln! (which returns a Result you can handle) instead of println!, if you want to tolerate non-blocking stdio and not crash so easily; but that doesn’t help with other people’s code from crates.io, which you don’t own.