11 April 2021
Preview of 'Everyone is still terrible at creating software at scale'

Everyone is still terrible at creating software at scale

"Sometimes I feel like there are two worlds in IT: the one you encounter on a daily basis, and the one you read about on HN. The only thing consistently "terrible" about both of them is the often expressed lack of humility towards solutions you don't "like", or more often, actually, solutions you don't understand.

Gottfried Leibniz suffered no small amount of flak for his conclusion that the reason the world looks the way it does is that it's already "the best of all possible worlds": you may think you see an improvement, but you lack divine understanding and therefore you don't see how it would actually make things worse.

While this is in my opinion a silly idea, I think most people who work in tech could use a bit of humility influenced by this line of thought: if a solution exists today, and it has yet to be replaced, it's at least possible that it's overall the best solution, and the reason you're not seeing that is a lack of understanding.

Of course this is not always true, but if nothing else it would lead to more interesting discussions than those stemming from someone saying "if you're not using Kubernetes you're a moron" and another replying "learn how to sysadmin and you'll realize you don't need Kubernetes in the first place"."

"At scale, software stops being a technology problem and becomes a people problem. And groups of humans don't scale.

We naturally organize into hierarchies of small subgroups with limited interaction boundaries. Each subgroup will adopt slightly different methodologies, have different tooling preferences, will have integration and communication overhead with others, and so on.

This cannot be prevented, only mitigated. Which is perfectly fine. The important step is recognizing that software development is really not special [1], and that the limiting factor is the human element. Then you can build your organizational structures around these restrictions.

[1] https://news.ycombinator.com/item?id=26713139"

"The tooling is simply not there, so every software project keeps pushing the boundary of what is possible in its own unique, fragile way.

People don't want solutions to yesterday's problems. Those are considered trivial and already solved, such as invoking a shell command (which just hides a lot of complexity under the hood). No one will pay you for invoking existing solutions. They pay you to push the boundary.

By tooling I mean programming languages, frameworks, libraries and operating systems. All of which have been designed for single-machine operation with a random-access memory model.

This no longer holds true. In order to scale software today you need multi-machine operation, and no such OS exists for it yet. Only immature attempts such as Kubernetes, whose ergonomics are far from the simplicity that you'd expect from a Unix-like system.

And the random-access memory model breaks down completely, because it is just a leaky abstraction. For full performance, memory is only accessed linearly. Any random-access structured programming language completely breaks down when working with GBs of data, in parallel and on a cluster of machines.

I don't think we'll push the boundary further if we keep piling crap on top of existing crap. The foundations have to change, and for that we'd better go back to the 70s, when these ideas were being explored."

Preview of 'Vgpu_unlock: Unlock vGPU functionality for consumer grade GPUs'

Vgpu_unlock: Unlock vGPU functionality for consumer grade GPUs

"> In order to make these checks pass the hooks in vgpu_unlock_hooks.c will look for an ioremap call that maps the physical address range that contains the magic and key values, recalculate the addresses of those values into the virtual address space of the kernel module, monitor memcpy operations reading at those addresses, and if such an operation occurs, keep a copy of the value until both are known, locate the lookup tables in the .rodata section of nv-kernel.o, find the signature and data blocks, validate the signature, decrypt the blocks, edit the PCI device ID in the decrypted data, reencrypt the blocks, regenerate the signature and insert the magic, blocks and signature into the table of vGPU capable magic values.

And that's what they do. I'm very grateful I wasn't required to figure that out."

"Amazing! Simply amazing!

This not only enables the use of GPGPU on VMs, but also enables the use of a single GPU to virtualize Windows video games from Linux!

This means that one of the major problems with Linux on the desktop for power users goes away, and it also means that we can now deploy Linux-only GPU tech such as HIP on any operating system that supports this trick!"

"Dual booting is for chumps. If I could run a base Linux system and arbitrarily run fully hardware-accelerated VMs of multiple Linux distros, BSDs and Windows, I'd be all over that. I could pretend here that I really need the ability to quickly switch between OSes, that I'd like VM-based snapshots, or that I have big use cases to multiplex the hardware power in my desktop box like that. I really don't need it. I just want it.

I really hope Intel sees this as an opportunity for their DG2 graphics cards due out later this year.

If anyone from Intel is reading this: if you guys want to carve out a niche for yourself, and have power users advocate for your hardware - this is it. Enable SR-IOV for your upcoming Xe DG2 GPU line just as you do for your Xe integrated graphics. Just observe the lengths that people go to for their Nvidia cards, injecting code into their proprietary drivers just to run this. You can make this a champion feature just by not disabling something your hardware can already do. Add some driver support for it in the mix and you'll have an instant enthusiast fanbase for years to come."

Preview of 'A man is looking for the friends who shipped him overseas in a crate in 1965'

A man is looking for the friends who shipped him overseas in a crate in 1965

"A photo of the actual crate he stowed away in (his rescuer is in the crate): https://ichef.bbci.co.uk/news/976/media/images/82156000/jpg/...

From this 2015 article: https://www.bbc.com/news/magazine-32151053

It really helps to picture being stuck upside down in the thing. I thought, “surely you could gradually rotate yourself the right way up” before I saw the photo."

"> At one point, Robson says he was left upside down on a tarmac, literally sitting on his head for 24 hours because there wasn't enough room in the crate to turn around.

I'm surprised he survived this. I remember when David Blaine hung upside down in Central Park for 3 days – but every few hours he had to take a break: https://www.dailymail.co.uk/tvshowbiz/article-1060038/Cheati..."

"From the article:

> They covered the crate with labels that read "Fragile," "Handle with care" and "This side up." It was scheduled to fly from Melbourne to London within 36 hours. Robson ended up being inside that crate for five days. "It was terrifying," he said. "I was passing in and out of consciousness. I had a lack of oxygen. Oh, it was bad." There seemed to be an endless number of stopovers, and the airport crews didn't pay much attention to the crate's labels. At one point, Robson says he was left upside down on a tarmac, literally sitting on his head for 24 hours because there wasn't enough room in the crate to turn around.

The rest of the article is great (in particular the way it ends), but reading the above excerpt made me shudder. The willpower of this man is unbelievable. I hope he finds his friends."

Preview of 'You Don't Need a GUI'

You Don't Need a GUI

"GUI is better for discoverability of the most common scenarios (I can right-click to see everything that can be done with the file, including some third-party programs). But it's bad for composing programs and interoperability, and it's often impossible or very hard to automate (how many people are doing UI testing? there is a good reason it's not many -- it completely sucks).

TUI/console is very good for interoperability/automation/muscle memory, but doing one-off things is harder. If I don't remember the `df` command... what do I do apart from searching on the internet? Maybe searching man pages, but it isn't as fuzzy (e.g. `man -K "free space"` isn't very fruitful). In comparison, I know that if I start going through a GUI, eventually I'll find the free disk space info.

It's interesting that there is some analogy to the object-oriented and functional paradigms -- if I have a rich object in a debugger in Python, I can call dir() on it to immediately see what I can do with it. Whereas if it's a simple dataclass, I'd need to search through code to find how I can use it. Might be too far-fetched an analogy, though.

I think we need all of it, and also need to make the distinction less apparent. It would be nice to make GUIs more composable, and TUIs more discoverable. Ideally it should be a spectrum of interfaces, so you don't have to make a hard choice and can gradually move in the direction you want."
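The dir() point above is easy to demonstrate in a couple of lines; the Invoice class here is a made-up example, not anything from the thread:

```python
# A rich object advertises its own "menu" of actions, a bit like a
# right-click menu in a GUI. (Invoice is a hypothetical example class.)
class Invoice:
    def __init__(self, total):
        self.total = total

    def pay(self):
        self.paid = True

    def refund(self):
        self.paid = False

# dir() lists everything the object exposes; filtering out dunder
# names leaves just the discoverable "verbs".
actions = [name for name in dir(Invoice) if not name.startswith("_")]
print(actions)  # ['pay', 'refund']
```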

"I'm not sure if this is satire or not, but certainly the single worst case for using a terminal is file manipulation. Nothing will ever beat right-clicks or the Ctrl+C/Ctrl+V combos. It's just more natural to "see" the things that you move.

Now when it comes to more complex applications, I'd stop using the CLI if I had to search the manpages or the internet each time to do something I'd already done previously. Since I discovered the Ctrl+R shortcut (for searching your history), it has been much easier. You read the manpage once, maybe you do an internet search, you enter the command, and then you can find it again as long as it's still in your bash history (be sure to set HISTSIZE and HISTFILESIZE to correct values). I also have a DOCS.txt file where I put all the rare but useful commands I'm afraid of losing. I don't need to be an expert in ffmpeg's options (... though I sort of am now), I can just look at my history or my DOCS.txt file!"

"As much as I like the command line, I couldn't help noticing from the first 10 or so entries[1] that the "stop" text consists of one of:

1. Drag and drop
2. Right-clicking
3. Ctrl-C and Ctrl-V

So these 3-5 things do everything in the list in a GUI, and instead the author wants us to learn 35 different command/syntax combinations?

As an aside, I'd like to write an article saying "You Don't Need Github To Write an Article". If you insist on using Github for that purpose, at least do it properly with a static site generator.

[1] Did not bother with the rest."

Preview of 'Nix is the ultimate DevOps toolkit'

Nix is the ultimate DevOps toolkit

"I tried Nix a few days ago. I set it up on an existing Arch install. I installed a couple of packages with "nix-env -i [package]" and then tried to update them with "nix-env -u" as instructed in the official documentation: https://nixos.org/manual/nix/stable/#ch-basic-package-mgmt

This ended up breaking the entire install. After a few hours of troubleshooting I found that the reason it broke was that it updated itself from version "nix-2.3.10" to "nix-2.3.10-x86_64-unknown-linux-musl", because it saw that package's version string as a version bump. The suggestion in the GitHub issue was to instead use an unofficial third-party package for basic package management, because this is a known, long-standing issue that is not likely to be fixed. https://github.com/NixOS/nixpkgs/issues/118481

The experience came across as a massive red flag and I decided not to pursue it further."

"I spent a few hours looking at Nix about a year ago and found it impenetrable.

I simply do not grok the syntax or what the functions do. I tried searching for the functions shown in the examples on the website, to no avail. I searched packages, options, and even resorted to Ctrl-F while clicking through the site "documentation"...

It sounds awesome, but it's in dire need of better documentation if it wants to be accessible, IMHO. I simply didn't have the patience to delve any deeper."

"Nix has a heavy learning curve and requires learning the language to feel comfortable. However, overcoming that hump is incredibly rewarding and allows for taming your system in a way that, for me at least, changed the way I look at composing software.

At mindbuffer[1] we've started using it for our recent art installations. The big benefits for us are reproducibility, ease of deployment, and the ability to collaborate on the composition of the whole system. I.e. rather than sharing a README of how to install things one by one and hoping each of us has followed it correctly, we just work on the same set of config files via a git repo (like we would any other code) and can be sure we're all on the same page as a result.

Very much looking forward to Nix 3.0 landing with all its UI improvements and flake support. It seems like these changes will go a long way to making Nix more accessible, and provide a smoother on-ramp to learning the language itself.

[1] https://mindbuffer.net/"

Preview of 'Setting up Starlink'

Setting up Starlink

"I'm using Starlink right now. AMA.

I'm in East Idaho. Currently my dish angles itself to the north. It rarely moves itself north/south, and slightly moves east/west throughout the day. I've read that right now it locks onto a single satellite, although they're adding multi-satellite support later.

My speeds are inconsistent, and interestingly they start slow (around 60 Mbps) but after a couple of seconds they'll get to 150-200 Mbps (which is awesome for downloads). Latency is consistently in the low 30 ms range. I get some downtime every day, so it really is a "beta" like they say. I have a backup WISP.

Setup was literally: take the dish out of the box, insert it into the tripod (included), plug in the cables, connect to the wireless router's SSID and activate with the Starlink app. After that I put the included router into storage and plugged in my Protectli[1] running CentOS. Everything works great. My only complaint is the CGNAT, but given the difficulty associated with procuring IPv4 addresses, it's understandable.

[1] I love this thing. Highly recommend: https://smile.amazon.com/gp/product/B0741F634J/ref=ppx_yo_dt..."

"Wow, I'm impressed! I thought it would take more than a decade to implement when it was announced, but it's here and it works. Just amazing.

100/16 Mbps is pretty decent I guess; hopefully it doesn't go down as the number of users goes up. The latency is great imo, 40ms using satellites? I don't think anyone has achieved that before.

Would a bigger dish work better or not?"

"The cell grouping is interesting. A colleague likes the outdoors, so he's preordered one to mount on his Suzuki Jimny. I wonder if Starlink are considering this use case.

I haven't been able to preorder mine, because we're planning on moving out from the city to a small village next year, and the Starlink website requires a street address.

Our villages are quite primitive, with no street names (I think it's cos nobody's thought of it). So the nearest town with street names is quite far. I was feeling uneasy about using it as an address, and this article sort of cements that concern.

I have 50/50 Mbps fiber, but reckon we could still be served by 20 down if needed. Exciting!"

Preview of 'Home-Built Scanning Tunneling Microscope (2015)'

Home-Built Scanning Tunneling Microscope (2015)

"His day job is pretty mind-blowing, too. He basically works on combing DNA, by squashing it "into nanogrooves [that] are less than 50 nm": https://dberard.com/research/

edit: added description of his work"

"This project is great. Last semester our group tried to rebuild it in our university lab.

Through our spectacular failure, we learned how amazing this project actually is. It is harder than it seems.

One can learn a great deal about piezo electronics, especially about op-amp circuits. And do not underrate the damping!"

"I worked with a simple STM in a lab exercise. At the beginning, we created a new, clean tip. Guess how we did that?

...We cut a piece of ordinary steel wire diagonally with an ordinary wire cutter. Somehow we did it right - the first one worked. It was all somewhat fiddly and sometimes the image was noisy. But we did get images of atoms."

Preview of 'FFmpeg 4.4'

FFmpeg 4.4

"We're all thinking it, but holy cow is FFmpeg one of the craziest pieces of software I have on my computer. The number of neat things I've been able to do with it, tossed into a bash loop before going to bed, is incredible. I'm sure a lot of us get a kick out of automating our tasks, and FFmpeg is king when it comes to that."

"I think everyone has a txt file on their computer filled with FFmpeg commands.

Care to share yours?"
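In that spirit, here is a minimal sketch of the kind of overnight batch loop the earlier comment describes. The directory names and encoder settings are arbitrary examples; the function only builds the command lines, and the commented-out subprocess loop would actually run ffmpeg (assuming it is on PATH):

```python
import pathlib

def build_ffmpeg_cmds(src_dir, dst_dir, crf=23):
    """Build one ffmpeg command per .mkv file in src_dir, targeting .mp4."""
    dst = pathlib.Path(dst_dir)
    cmds = []
    for f in sorted(pathlib.Path(src_dir).glob("*.mkv")):
        out = dst / (f.stem + ".mp4")
        # -c:v libx264 re-encodes the video; -crf sets the x264 quality target.
        cmds.append(["ffmpeg", "-i", str(f), "-c:v", "libx264",
                     "-crf", str(crf), str(out)])
    return cmds

# Overnight loop: kick them all off and go to bed.
# import subprocess
# for cmd in build_ffmpeg_cmds("rips", "encoded"):
#     subprocess.run(cmd, check=True)
```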

"> Rayman 2 APM muxer
> LEGO Racers ALP (.tun & .pcm) muxer

Maybe that just shows off my ignorance, but reading the changelogs (current and past), I never realized that ffmpeg contains so many "niche" features."

Preview of 'Best practices for writing SQL queries'

Best practices for writing SQL queries

"Avoid functions in WHERE clauses

Avoid them on the column side of expressions. This is called sargability [1], and refers to the ability of the query engine to limit the search to a specific index entry or data range. For example, WHERE SUBSTRING(field, 1, 1) = "A" will still cause a full table scan and the SUBSTRING function will be evaluated for every row, while WHERE field LIKE "A%" can use a partial index scan, provided an index on the field column exists.

Prefer = to LIKE

And therefore this advice is wrong. As long as your LIKE expression doesn't start with a wildcard, LIKE can use an index just fine.

Filter with WHERE before HAVING

This usually isn't an issue, because the search terms you would use under HAVING can't be used in the WHERE clause. But yes, the other way around is possible, so the rule of thumb is: if the condition can be evaluated in the WHERE clause, it should be.

WITH

Be aware that not all database engines perform predicate propagation across CTE boundaries. That is, a query like this:

    WITH allRows AS (
        SELECT id, result = difficult_calculation(col)
        FROM table
    )
    SELECT result FROM allRows WHERE id = 15;

might cause the database engine to perform difficult_calculation() on all rows, not just row 15. All big databases support this nowadays, but it's not a given.

[1] https://en.wikipedia.org/wiki/Sargable"
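The sargability point is easy to see with SQLite's EXPLAIN QUERY PLAN (other engines have analogous tools; the table here is a toy example). One SQLite-specific wrinkle: the LIKE-prefix optimization only applies when LIKE is case-sensitive, hence the pragma:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (field TEXT)")
con.execute("CREATE INDEX idx_field ON t (field)")
# SQLite only rewrites LIKE 'A%' into an index range scan when
# LIKE is case-sensitive.
con.execute("PRAGMA case_sensitive_like = ON")

def plan(sql):
    """Return the query plan detail text for a statement."""
    rows = con.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(r[-1] for r in rows)

# Function on the column side: a full SCAN, substr() runs for every row.
print(plan("SELECT * FROM t WHERE substr(field, 1, 1) = 'A'"))
# Sargable prefix match: the engine can SEARCH a range of the index.
print(plan("SELECT * FROM t WHERE field LIKE 'A%'"))
```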

"> Although it’s possible to join using a WHERE clause (an implicit join), prefer an explicit JOIN instead, as the ON keyword can take advantage of the database’s index.

This implies that a WHERE-style join can't use indices. I can understand why some would prefer either syntax for readability/style reasons. But the idea that one uses indices and the other doesn't seems highly dubious.

Looking at the Postgres manual [1], the WHERE syntax is clearly presented as the main way of inner-joining tables. The JOIN syntax is described as an "alternative syntax":

> This [INNER JOIN] syntax is not as commonly used as the one above, but we show it here to help you understand the following topics.

Maybe some database somewhere cannot optimise queries properly unless JOIN is used? Or is this just FUD?

[1] https://www.postgresql.org/docs/13/tutorial-join.html"
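At least in SQLite (bundled with Python), the planner appears to treat both spellings of an inner join identically. A quick check with toy tables, as a sketch rather than proof about every engine:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE a (id INTEGER PRIMARY KEY, val TEXT)")
con.execute("CREATE TABLE b (id INTEGER PRIMARY KEY, a_id INTEGER)")
con.execute("CREATE INDEX idx_b_aid ON b (a_id)")

def plan(sql):
    """Return the list of query plan detail strings for a statement."""
    return [r[-1] for r in con.execute("EXPLAIN QUERY PLAN " + sql)]

# Implicit (comma + WHERE) join vs. explicit JOIN ... ON.
implicit = plan("SELECT * FROM a, b WHERE a.id = b.a_id")
explicit = plan("SELECT * FROM a JOIN b ON a.id = b.a_id")

# Both should use the same index-driven plan.
print(implicit)
print(implicit == explicit)
```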

"This is an aside, but a colleague years back showed me his preferred method of formatting SQL statements, and I've always found it to be the best in terms of readability; I just wish there was more automated tool support for this format. The idea is to line up the first value from each clause. Visually it makes it extremely easy to "chunk" the statement by clause, e.g.:

       SELECT a.foo, b.bar, g.zed
         FROM alpha a
         JOIN beta b ON a.id = b.alpha_id
    LEFT JOIN gamma g ON b.id = g.beta_id
        WHERE a.val > 1
          AND b.col < 2
     ORDER BY a.foo
"

Preview of 'Why do long options start with two dashes? (2019)'

Why do long options start with two dashes? (2019)

"It is sad that many new command-line parsing libraries don't follow the GNU rules anymore. They more often use "-long". Then users have to figure out whether this means "--long" or "-l -o -n -g". To make the command line even more confusing, multiple tools I have used allow spaces in option arguments (e.g. "-opt1 arg1 arg2 -opt2", where arg1 and arg2 set two values for -opt1). Every time I see this, I worry that I could be misusing these tools. I wish everyone would just follow getopt_long() and stop inventing their own weird syntax."
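For reference, Python's argparse follows the GNU convention out of the box: bundled single-dash short options plus double-dash long options. The flag names here are just examples:

```python
import argparse

parser = argparse.ArgumentParser(prog="demo")
parser.add_argument("-v", "--verbose", action="store_true")
parser.add_argument("-q", "--quiet", action="store_true")

# "-vq" unambiguously means -v -q: short options are one dash and
# one character, so they can be bundled...
args = parser.parse_args(["-vq"])
print(args.verbose, args.quiet)  # True True

# ...while the full word takes two dashes, so it can never be
# mistaken for a bundle of single-letter flags.
args = parser.parse_args(["--verbose"])
print(args.verbose, args.quiet)  # True False
```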

"I love the "long options start with two dashes" convention. It means that you can choose short options that are easily combined (in cases where the command and its options are often used), or you can use long options that are much easier to understand (because they are full words). More command-line tools should support them."

"It always confused me when tools don't follow that rule. E.g. "find", where "find . --name '.dat'" won't work but "find . -name '.dat'" will. And it's not the only one."
