Or, "why people who block access to their website based on User-Agent string are incompetent fuckwits" (more in the series "why people who X are incompetent fuckwits" to come later).
I'm not much of a web developer; my skills hew mainly to what the kids these days call "backend" (or possibly the kids from years ago; I remember when "DevOps" was called "a programmer with sysadmin skills". God I'm old). And yeah, I get grumpy about things that I'm not fully aware of (it's a problem I'm working on). Combine this with my lack of subdomain expertise, and IME, IMHO, YMMV, TTWaGoS, IANAL, OMGWTFBBQ (TWAJS).
But here's the thing: the User-Agent is completely fungible - it can easily be changed by the end user. It's not unique, and there are so gorram many of them by now, you're playing whack-a-mole trying to keep up with every one out there.
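Don't believe me? Here's all the effort it takes to be any browser you please (UA string truncated for brevity; substitute your favorite):

$ curl -A "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36" https://example.com/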
(And this is before we even get to what amount to plaintext websites not working with text mode browsers because some asshole designer thought they needed to shittify the content that people actually care about with bloated, unnecessary javascript; another rant for another day)
I'm not the only one who has come to this reasoned conclusion - the first result in Google is a Mozilla Developer Network article that says the following:
When considering using the user agent string to detect which browser is being used, your first step is to try to avoid it if possible.
And specifically:
Are you trying to check for the existence of a specific feature? [...] This is the worst reason to use user agent detection because odds are eventually all the other browsers will catch up. In addition, it is not practical to test every one of the less popular browsers and test for those Web features. You should never do user agent sniffing. There is always the alternative of doing feature detection instead.
(bolding mine)
And then they continue on to list code examples of how to properly detect feature support in a browser.
So, to the developers of Slack, who make me change my User-Agent before I even fucking log in: fuck you, you incompetent fuckwits.
Also, fix your fucking preferences - I shouldn't have to turn on dark mode every gorram time I log in. What is this, the 1990s running off a CD-ROM? No, you don't need a cookie on my machine to record it; just flip a bit in the database entry for my account that you're already storing on your servers.
No web developers were hurt in the making of this blog post, although some should have been. Possibly feelings will be hurt, and valid claims could be made that this post is incendiary, profane, unprofessional, and doesn't help solve the problem, but it's not here to fix things; it's my outlet. If you really want to "git gud noob", try these:

posted at: 19:28 | path: | permanent link to this entry
So, I'm struggling my way through "Lisp Web Tales" after having cried over Marc-André Leclerc, then having chased that with Michelle Wolf, and having nearly finished a bottle of Merlot, when I think "hey, I should set up a blog again." Only I'm too fucking lazy to write one from scratch (and the one in "Lisp Web Tales" is broken; fix later), so I figure, WTF, just set up blosxom again.
Shazam! I have a blog again (god I fucking love Debian). Eventually I might (maybe) write something from scratch, mostly to prove I can and because I'm bored and need the practice, but for now, this will do. Until then, I found something that tickles the little gray cells (note to self: work Poirot's theme): alist or plist?
And yes, I did just "steal" someone else's much awesomer (and soberer (probably)) blog post from nearly ten years ago (fuck off, Lisp is timeless). Expect a lot of that.
Oh, and I'm also hiking the PCT. Stay tuned.
Also, the server is on UTC, and I had to correct some URLs in previous blog posts, so the order is fuxxored; if I get bored I may go back and fix them, since I still have the original files with timestamps and touch(1) is rather versatile.
posted at: 08:06 | path: | permanent link to this entry
I hiked Black mountain, this time from the Garlock road side
posted at: 08:01 | path: | permanent link to this entry
While I'm no expert on git, I've used it enough to have formed some opinions, and the first one is that people who call it "overly complicated" or "not user friendly" are wrong. This simple fact is not up for debate.
Where things get interesting is when you decide certain things like whether or not to have a linear history (via rebase) or to use merge commits in all their soul-crushing hairy glory. You might be able to guess where I stand on this matter, but let me just try to convince you, as others have tried to convince me:
Take the argument that pull --rebase will break your unit tests, and there's nothing you can do about it. While it is commendable to ensure that every commit builds and passes tests, the assertion that rebasing will break tests irreparably is flatly not true. How do I know? Because I rebase all the time, and I have a strict policy that all commits to the public repository (origin/master) must build and pass tests.
And how do I accomplish this feat? Simple: rebase. The answer is always rebase. Made a typo in your commit message? Rebase (although this has handily been aliased to "commit --amend", it's still technically a rebase). Have a commit that reverts a commit immediately preceding it? Rebase-squash cancels them out. Have a commit with a unit test for a feature implemented in the next commit? Rebase them into one.
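That last one is just an interactive rebase; a minimal sketch (hashes and commit messages made up for illustration):

$ git rebase -i HEAD~2
# In the editor that pops up, mark the second commit to be squashed into the first:
pick a1b2c3d Add unit test for frobnicator
squash d4e5f6a Implement frobnicator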
So, just for more detail, here's a brief overview of my setup: Gerrit hooked up to Jenkins, where every commit must compile and pass over 300 unit/regression tests on six different build configurations between two compilers with every single warning as an error, on two operating systems, pass a wide variety of linters, and also have everything documented down to parameters and return values.
This setup doesn't allow for broken pushes, and if someone pushes something that passes all the automated tests and a human code reviews and accepts it, you will have to pull those changes down and replay your work on top of them. How do you do that? I'll give you a hint: it's one word and the output message from the command sounds similar to my last sentence (hint: it's rebase).
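In practice, that looks something like this (remote and branch names per my setup above, commit subject made up; and yes, older versions of git really do print that first line):

$ git fetch origin
$ git rebase origin/master
First, rewinding head to replay your work on top of it...
Applying: my latest change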
"But what if there's a conflict?" you say. If there's a conflict, then merge would have had a conflict to resolve as well, and at least you know that since you were slow on the draw, you have to change your code to accomodate the code that's already been accepted. "But what if upstream's changes break my changes?" you say. Well, then fix your changes. "But what if it breaks my new, as yet unaccepted tests?" If the previously pushed code is broken, treat it as a bugfix, fix it on a different branch, squash the test and fix into one commit with rebase, and push that, then switch to your original branch, rebase it on top of your bugfix/test change and continue working on your other changes. No matter what version control you use, there will be cases where you have to integrate your changes with other peoples' changes, and part of being a professional is handling this with grace (or just making your changes modular and isolated enough that they don't conflict in the first place).
Remember: in my system, any change is rejected if it breaks the tests. Therefore it's impossible to get a change in, merge or rebase, that will break the tests. If you're adding a new test to catch a bug, good! But sometimes that means you have to fix the bug, even if you didn't create it, and even if it didn't exist when you created the test.
So the real question becomes: if it works for both rebase and merge, why not use merge? Well, I'll admit this comes down to personal preference and aesthetics, but I feel with good justification: I don't care about every time a developer reverted an immediately preceding change because they tried something out and found it didn't work or wasn't what they wanted. I don't want to see the often confused, muddled thought process that eventually made its way to a working piece of code. That shouldn't be in the public repo, and I definitely don't want to have to bisect over it. I want a linear, coherent, cleaned-up public repository history that I can quickly and easily bisect, without having to take exponential diving trips down different branches. You get a clean history by rebasing before pushing to the public repository.
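And when a bug does slip past the tests, a linear history makes finding it almost mechanical (good revision and test command made up for illustration):

$ git bisect start
$ git bisect bad HEAD
$ git bisect good v1.2
$ git bisect run make test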
posted at: 05:30 | path: | permanent link to this entry
I headed to Red Rocks over the four day weekend for some fantastic climbing. The weather was moderately hot, but we got in plenty of climbing. I was joined by Luke Swanson and Alex Satonik, and we all managed to hop on lead on routes including "Dream of Wild Turkeys", "Black Orpheus" and "Group Therapy".
posted at: 19:25 | path: | permanent link to this entry
I've always been selective in what media I choose to consume. Since experiencing most media is a passive activity best kept to a minimum, it follows that one should select only specimens of superior quality. That's not to say I don't occasionally indulge in a guilty pleasure, such as watching "Johnny Mnemonic".
In recent years, however, availability of quality entertainment has taken a noticeable downturn. It's not clear why, but I'm going to hypothesize that the issue is with you, the average moviegoer. Do me a favor and help me with this test: go to How it Should Have Ended and look up your favorite movie. Watch as the flaws and bad writing are pointed out, even if only lightheartedly. Now go to Screen Junkies and watch the Honest Trailer for the same movie. Repeat, this time with CinemaSins. Just for completeness, go read the plaintext review of the same movie at The Filthy Critic.
Did you notice a pattern? Perhaps that movie you thought was so awesome really wasn't very good. That "mindbender" you've loved for years isn't all that smart. The fact that you like these movies means you've probably paid to watch them, which means that you've lowered your standards and Hollywood is pandering to those low standards because they're easy to meet.
Don't believe me? I'm not the only one with this idea: people with more knowledge of moviemaking than I have come to the same conclusion. Movies are generally bad, they're getting worse, and they're not going to get better any time soon.
So what can you do? Stop. Watching them. Stop wasting your money, and more importantly, your time, on drivel. Go out and have your own adventures. This goes for games as well, which, while they have slightly more interactivity and stimuli, still have the same issues.
posted at: 01:29 | path: | permanent link to this entry
Many moons ago, I remember reading a very insightful screed against a language that should have been put out to pasture long ago: PHP. I guffawed, shook my head, and smiled smugly that at least I wasn't using PHP.
Until my webmail (written in PHP) got hacked into. PHP would be a nightmare we could all wake up from, if there wasn't so much software written in it.
And then today, I get an email announcing that, yet again, they've found more security holes in PHP. Not just one hole, but many; here's the list:
Welcome to PHP, where NUL terminating your strings will allow an attacker to overwrite files on your server, and attackers have no end of options for arbitrarily executing code!
It's like sendmail and bind - they've had their use, seen their heyday, but anyone with a lick of sense and competence in the field knows that they're so full of holes you don't run them without a team of at least 10 admins. Meanwhile, qmail and djbdns work just fine for multiple domains being run by one guy on absolutely minimal time (something like 1/8 of a full-time job).
Seriously, though, this is a call to arms, a call to war even: a war on PHP. Developers, stop writing PHP. Admins, remove PHP from your systems, dev seats and servers included. Hosting providers, stop supporting PHP and set up something better like Python or Perl instead. The line must be drawn here, no further. This has to end now. For the good of all humanity, please make it stop!
posted at: 00:03 | path: | permanent link to this entry
Alex Satonik joined me for this little adventure in climbing 11,000+ft in a day, and Tom Roseman graciously offered to shuttle a vehicle so we could hike down to Mahogany flat when we were done.
We couldn't have asked for a more perfect day; the weather was clear all day, with minimal snow on trail and the occasional light chilly breeze. We not only had a full moon to start hiking by (at 2 in the morning), but got to see it fully eclipsed before it disappeared behind the ridgeline due to our going up Hanaupah canyon.
There was running water (snowmelt) in Hanaupah canyon which we took advantage of at daybreak, and then easily found our way up to the ridgeline. It was a long uphill hike to the trail in the saddle near Eagle spring, but we made it in good time, then headed to the peak after a short lunch break.
We met Tom coming up the trail to bag the peak as we were coming down, and waited for him at the car, after which we drove home. Alex and I considered bagging Bennet and Rogers on the way back, but decided against it since we were running out of water.
posted at: 19:00 | path: | permanent link to this entry
For more conditioning for my Badwater to Telescope trip, Alex Satonik suggested we day hike Olancha peak. I've hiked it as a winter overnight with Bob Huey and Bill Stratton (photos, GPS track), but never in a day without enough snow to snowshoe, so I figured it would be a good hike. Alex's spouse Sarah Herrington joined us.
The weather wasn't bad, a bit chilly at times, a bit windy. It was a long, tough hike, but we all made the peak (and back)!
posted at: 17:32 | path: | permanent link to this entry
To scout out Telescope peak for the trip from Badwater to Telescope, I decided to hike Wildrose peak. I was joined by Jeff Green, Alex Satonik, and two of Jeff's friends, Shelly and Steve.
It started out a chilly day, and was very windy (cold wind!) on the top, but then turned into a warm day on the way back. We placed a summit register as the current ones were all full.
We drove the Wildrose road on the way up, but decided to take the more scenic and longer route on the way back. We also scouted out the Mahogany flat campground road and found it very driveable.
posted at: 20:41 | path: | permanent link to this entry
More cold weather hiking; this time I headed up Whitney portal road, but not quite all the way to the portals. Instead, I opted to saunter up Meysan drainage to make an attempt on Irvine or Mallory. Alex Satonik joined me (and kicked my ass uphill), but we got a late start so we didn't make any peaks. Still a good day, with beautiful (but cold!) weather.
posted at: 21:47 | path: | permanent link to this entry
I needed to get to altitude and work on my aerobic conditioning, so I led a hike up Owens peak this past Sunday, joined by James Rogers and Dr. Bill Ferguson. It was chilly, but a beautiful day, with snow (rime) on the trees and low clouds.
posted at: 03:32 | path: | permanent link to this entry
I like to go on weeklong backpack trips. Day hikes, day climbs, or even overnight trips are nice, but there's something that happens when you're out on the trail for more than a couple of days. Sure, you get fairly fragrant, even with a daily rinse in a stream or lake. But you also enter into a certain mindset. You forget what day of the week it is. You don't worry about daily chores such as checking email. Your only concerns are hiking to the next camp, setting up camp and making dinner.
With so little to focus on, what do you do? I've taken books in the past, and focusing so fully on one thing can be richly rewarding. While that might seem austere compared to the multifaceted distractions of a smartphone, even leaving the book behind can be its own reward.
You gain time to reflect, to let your thoughts settle and pause to think about the bigger picture. Not in any sort of compelling, life-altering sort of way, but just to meditate serenely. I know this may sound like a bunch of New Age bullshit, but there's something about clearing one's head with nothing but the beauty of the surrounding environment to distract you. The exercise also helps to boost positive feelings via endorphins.
Anyway, all this is just to preface an entry on my latest foray into the wild: Rae Lakes loop. It's a nice little trip in the Sequoia Kings Canyon National Park. The majority of trip reports talk about starting this loop from the West, beginning at Road's End, but living on the Eastern side of the Sierras, I decided to enter via Kearsarge pass.
I didn't get any takers from the mountain rescue group except Jeff for a day hike of Gould, so my father and I turned it into a leisurely hike. Water was in abundance, although there were some long dry stretches of trail where it was inaccessible. Plenty of beautiful scenery and excellent weather - a bit warm at lower elevations, and not too cold at higher elevations, even on the last night when we camped at 11,200ft.
We met a bunch of PCTers and JMTers once we hit that section of the loop. We also attempted Rixford from Glen pass via the ridgeline (we'd rather scramble over rock than go up the grunge), but got cliffed out and eventually gave up.
Saw deer, squirrels, chipmunks, tadpoles, fish, mosquitoes and plenty of birds. This also marks the first hike where I saw a snake, albeit one with a hole in its side that flies were going in and out of.
2014-06-29 to 07-05: Rae Lakes loop photos

posted at: 02:23 | path: | permanent link to this entry
One idea I had come up with recently was to take full advantage of full moon nights and the Summer solstice (June 21 this year). Someone had also recommended I hike Kern peak from Horseshoe meadows. I decided to try making it there and back in a day, putting the 14+ hours of sunlight on the Summer solstice to good use. Even then, it would be difficult, because it's 15 miles one way to the peak. As part of training, I hiked Kearsarge pass on May 10. I didn't get out again until June 08, to explore Horseshoe meadows, Trail pass, and Mulkey Pass. And I went one last time, the weekend before (June 15), to place a water cache.
The big day approached and I drove up the night before to car camp at Horseshoe meadows so I could acclimate and get an early start. I made the peak!
Photos of Kern peak hiked from Horseshoe meadows
It was a long day, and I ended up hiking back in the dark at the end, but I managed to cover all 35.2 miles in one day. It was pretty straightforward as there was trail all the way, although it's not maintained (there were some downed trees) and not very well defined in places. The remnants of a fire lookout are still on top, although they've been taken over by the birds.
posted at: 02:27 | path: | permanent link to this entry
One of the big benefits of the Internet is that it "levels the playing field" by eliminating barriers to communication and allowing anyone their own "soap box" to put forth their opinions. Indeed, the reduction of friction in conducting a back and forth discussion is lauded as one of the big reasons to have comments on your blog.
So, why don't I have comments? In three words: too much work. I actually chose a blogging platform that doesn't support comments out of the box (it's very minimalist), and for me, that's a good thing. I'm all for free expression, the cut and thrust of debate. But that's not what many others are interested in.
There are those who would post hateful, vitriolic comments of no substance; while I'm all for free speech, that's not the kind of speech I would like to encourage. There are also others who would use my blog as a platform to peddle all manner of "goods", and I'm also not interested in encouraging that kind of speech. Quite frankly, taking care of these problems has known solutions, but they are painfully time consuming, and if there is something I've learned from reading even good comments, it's that I don't have that kind of time.
There's also the matter of my server, my rules. Yes, I understand you may have something insightful to say, or you feel really strongly that you should be able to respond to my posts right below them. Too bad. I'm paying for the hardware, the electricity, the bandwidth, the domain name, not to mention my already mentioned scarce time.
These days, it's so easy to get your own blog, or even find forums that you can post a rebuttal in, that not taking the time and effort to use those avenues of discourse tells me all I need to know about your comments: if they're not worth putting in the effort to host them yourself and to attach your own name to them, then they are probably not worth my time to read. Get your own blog; it's easy!
What I do find most ironic about the link to Coding Horror given above is that he doesn't have comments; oh sure, he has a Discourse "forum" setup for every blog post there, and he makes several good points in that blog post (and his other on "real blogs"), but he then cites a very similar reason to mine for the separation: "here's a fairly strong, but permeable, membrane between the editorial area here and the community area there. This is intentional." So much for not creating a pulpit.
I'm not trying to be elitist; on the contrary, much like the phrase "patches welcome", I'm inviting you, dear reader, to elevate yourself and dedicate some time, effort and most of all, thought, to any sort of rebuttal you may have. Start a blog of your own! And if you want to complain that you don't have that time, effort, or thought, then why should I dedicate any of mine to helping you spread your opinion?
posted at: 02:18 | path: | permanent link to this entry
In the spirit of "Thank you for giving me the opportunity to explain this to you" (TL;DR - free software doesn't have "end" users, that's the point), I'd like to thank Google for forcing my hand.
You see, I've been thinking for some time now that I shouldn't be so reliant upon other people's servers, or more importantly, other people's closed source software. To be sure, I have rid my life as much as possible of things I cannot fix, such as Windows and OSX, but I had fallen into relying upon things such as Google Maps to share GPS tracks of my hikes.
It was pretty cool and handy to be able to post my GPS tracks (converted to KMZ in Google Earth) to my web server, and then tell Google Maps to load the link and show the topographical layer, then share a link to that with friends and family. As opposed to a static image that couldn't be explored, or sharing a file that would have to be downloaded and opened in a separate program, this allowed people to click on a link and instantly be able to zoom around and get a feel for where I had traipsed off to.
But then Google decided to "improve" the interface to Maps, and much like the "improvements" they've made to their standard search engine over the years, they have let some features evaporate. I now can't just paste in a simple URL to a KMZ file on another server. I can't even begin to figure out how to get Google Maps to let me reproduce the perfectly good functionality that had existed up until recently. I suspect I have to sign in to some picayune social network to accomplish the same thing now.
No thanks. I've been researching alternatives for quite some time now, and while it is not quite as slick as Google Maps, at least it does what I want, and I have control over what features do and do not exist in the product.
So while I had been thinking about doing this for a while, I'd like to thank Google for helping to make my choice clear, and helping me to remove one more piece of closed source software (Google Maps) from my life. Thank you Google!
EDIT: So, trying this again, it appears to be working now (2014-03-29). Not sure why, but it still stands that I'd like to have a photo gallery of my own, with a slideshow feature and be able to display a track overlaid on a topo map with links to photos geolocated on the map. Open source and open data required, plus I'd prefer something lightweight and not written in PHP, so phpMyGPX, while nice, doesn't quite fit my needs. I'm currently looking to Python and the multitudinous libraries thereof (mapnik, Kartograph, and gpxpy are just a small sampling). There's also some really interesting things going on at OctoMap, which just so happens to overlap with the current direction of the project at my day job, not to mention that it brings me back closer to the work at ROS that I had leveraged for another project at work and also had to do with mapping. Fun times!
posted at: 20:12 | path: | permanent link to this entry
On Sunday and Monday, I went on Bob Huey's overnight trip to Mt. Whitney, but we bailed and bagged Wotans Throne. Photographs.
posted at: 17:55 | path: | permanent link to this entry
I hiked Skinner peak with Bob Rockwell, Linda Finco, Dave Doerr, Tom Sakai, Mike Myers and Walter Runkle on February 08, 2014. It was a fairly blustery day, but a beautiful hike nonetheless.
Photos and GPS tracks.
posted at: 17:44 | path: | permanent link to this entry
Reading Hacker News, I came across someone's goal of reading a man page a day. The discussion around that post quickly turned to how to pick manual pages, and I decided to try to come up with a way to automatically load a random one in Emacs. Here's the result:
;; Taken from http://emacswiki.org/emacs/ElispCookbook#toc57
(defun directory-dirs (dir)
  "Find all directories in DIR."
  (unless (file-directory-p dir)
    (error "Not a directory `%s'" dir))
  (let ((dir (directory-file-name dir))
        (dirs '())
        (files (directory-files dir nil nil t)))
    (dolist (file files)
      (unless (member file '("." ".."))
        (let ((file (concat dir "/" file)))
          (when (file-directory-p file)
            (setq dirs (append (cons file (directory-dirs file)) dirs))))))
    dirs))

;; Taken from
;; http://stackoverflow.com/questions/3815467/stripping-duplicate-elements-in-a-list-of-strings-in-elisp
(defun strip-duplicates (list)
  (let ((new-list nil))
    (while list
      (when (and (car list) (not (member (car list) new-list)))
        (setq new-list (cons (car list) new-list)))
      (setq list (cdr list)))
    (nreverse new-list)))

;; Display a random manual page
(defun open-random-man-page ()
  (interactive)
  ;; Get manual page paths from the environment.
  (setq man-paths (parse-colon-path (getenv "MANPATH")))
  ;; What if MANPATH isn't set or is empty? We'll take a guess:
  (if (eq man-paths nil)
      (setq man-paths (list "/usr/share/man")))
  (setq man-dirs ())
  (dolist (man-path man-paths)
    (setq man-dirs (append man-dirs (directory-dirs man-path))))
  ;; Get a list of files in manual page paths.
  (setq files ())
  (dolist (man-dir man-dirs)
    (setq files (append files (directory-files man-dir nil "^[^\.].*"))))
  ;; Fixup the files to be a list of man pages.
  (setq man-pages ())
  (dolist (file files)
    (setq man-pages (cons (car (split-string file "\\." t)) man-pages)))
  (setq man-pages (strip-duplicates man-pages))
  (random t)
  (princ "Selecting random manual page from " t)
  (princ (length man-pages) t)
  (princ " possibilities." t)
  (manual-entry (nth (random (length man-pages)) man-pages)))
I'm sure there's a more elegant way of doing this, but it works, mostly (just M-x open-random-man-page and away you go). It doesn't filter out subdirectories (such as man1), and only gives you the default manpage if there is more than one (such as read). Some other things caught my eye, though: on my main home machine, this code finds roughly 20k manual pages, which would take over 50 years to read at one a day. Of course, you could read more than one a day, or focus on things you are using or interested in. Another way to use this code is as a starting point: fire up a random manual page, then research more from there; at the very least, after reading it you are aware of "the tip of the iceberg" (especially in the case of the many libraries with man pages).
posted at: 18:43 | path: | permanent link to this entry
Recently, I was tracking down a crash caused by mixing memory allocation/deallocation functions, and in the course of trying to create a solution, I came across something puzzling. I was attempting to recreate the canonical pedagogical example of passing or returning by value in C++, which normally makes use of the copy-constructor. Indeed this is what most textbooks on C++ claim. Yet the following code produces unexpected output upon execution:
/*BINFMTCXX: -DSTANDALONE */

// For std::cout and std::endl.
#include <iostream>

class MyClass
{
public:
  MyClass():
    m_ii(0)
  {
    std::cout << "MyClass()" << std::endl;
  }

  MyClass(const MyClass&):
    m_ii(1)
  {
    std::cout << "MyClass::MyClass(const MyClass&)" << std::endl;
  }

  int m_ii;
};

MyClass myFunc()
{
  MyClass tmp;
  return tmp;
}

int main(void)
{
  MyClass a(myFunc());
  std::cout << a.m_ii << std::endl;
} // int main(void)
Running this, I get
MyClass()
0
Only one constructor is called and it's not the copy-constructor. Why is this? You can try all sorts of methods to "force" the compiler to call the copy constructor, but ultimately, the compiler probably knows more than you. In the end, I tried modifying the class in the function before returning it, passing a value derived at run-time to make sure that couldn't be optimized; I tried privatizing the copy constructor as it obviously wasn't being called; I even tried throwing an exception in the copy constructor. Nothing worked! Until I tried this:
MyClass myFunc(const MyClass& orig_)
{
  return orig_;
}

int main(void)
{
  MyClass orig;
  MyClass a(myFunc(orig));
  std::cout << a.m_ii << std::endl;
} // int main(void)
The key here is that the copy constructor is required. Previously, it was obvious to the compiler that tmp wouldn't exist outside of myFunc, so quietly eliding all those constructors (one for tmp, one to copy tmp into a) and destructors was the logical thing to do. Only when the compiler couldn't get away with that did we force its hand and make it use the copy constructor.

As an aside, the flag -fno-elide-constructors will force GCC to use copy constructors for all pass by value operations. Interestingly, if you make the copy constructor private, GCC will not compile the first example, claiming that the copy constructor is required for return by value.
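To see the difference, here's roughly what the first example does with eliding turned off (the file name is just what I happened to save it as, and exact copy counts may vary with compiler version and language standard):

$ g++ -fno-elide-constructors -o elide elide.cc
$ ./elide
MyClass()
MyClass::MyClass(const MyClass&)
MyClass::MyClass(const MyClass&)
1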
posted at: 04:23 | path: | permanent link to this entry
Lately, there's been a spate of writings in response to a controversial one entitled "Please Don't Learn to Code". People have been exhorted to "Please Learn to Code", "Please Don't Become Anything, Especially Not A Programmer", etc, etc. While I'd be loath to tell anyone what to do (anarchists never ruling and all that), I would like to offer some friendly advice to those willing to take it: learn.
Learn something, anything, just don't sit still. Those who don't learn history are doomed to repeat it, and if you're learning something, it's probably already history (which I'll get into more in a bit).
In particular on programming: it couldn't hurt to know how to slap together a bit of code to quickly solve a problem. That's how a lot of great programmers got started. I'd also like to think that the original article is not discouraging people for fear of competition; he actually makes it quite clear that what he would like to discourage is more bad code coming into existence, and he thinks that discouraging non-programmers from learning to code will help. That's where I think he goes wrong.
If anything, the people who should either learn how to code properly or not even bother are the ones who are already writing software. You think I'm joking? I've seen enough bad code to realize where Atwood is coming from. I'd rather that when someone wants a problem solved, they don't dredge up something entirely inappropriate and unmaintainable, but actually ask to have something designed properly from the get go. If someone has tried to learn how to write good software, but can't (or worse, won't), then they should stop giving their code to others. Please note that I didn't say they should stop coding, just that they shouldn't let their little monstrosities into the world.
And here's where we have the dichotomy. Should people, in general, have a better understanding of computers and how they work, even being able to program them, so that they can solve problems on their own? Most definitely yes. Programming is an incredibly liberating and empowering experience. You can create whole worlds, universes even, while programming. I can't recommend it strongly enough to everyone.
Now comes the other side of the coin: good software is hard to get right. It takes practice, perseverance, study and discipline. Should people release their dirty little one-off hacks, just to try to be helpful? I appreciate the sentiment, but that kind of thinking got us PHP. Programming is fun, and can be extremely useful in solving problems. But not all code is created equal.
Code to be released to others has to have some sort of value. If it only solves your problem, who cares besides you? It might be able to be worked into something more flexible, more reusable, possibly even something bug free. But is it really worth the effort? Especially when designing something properly from the start would get you the same results with less overall effort?
Again, I don't want to discourage anyone from learning, and here's where history comes in. You need to read. Sure, you may have written a script to automate some dreary task, and now you want to do more. Great! Pick up the classics. Read "Mythical Man-Month" and learn not just why planning is important, but why throwing more monkeys at the code won't help, and maybe most importantly, learn what we've lost. You do know that it used to be standard practice to have a fully working emulator for hardware before the hardware existed, so that the systems programmers could work in parallel with the hardware guys, right? Or did you know that the mouse and teleconferencing have been around longer than most people think? Those nifty features in your current favorite programming language? Already in LISP fifty years ago.
The computing industry is full of history lessons that have been forgotten. It's also full of untold wonder. My advice is to learn, whether you are a new or old programmer. Learn to write good code; be a net positive and create gloriously beautiful works of art that solve incredibly hard engineering problems. When someone says it can't be done, ignore them. When someone says you shouldn't learn something, learn that thing.
And look at code, good code. How do you know good code? It's been around a while and people are still working on it. This might seem counterintuitive, but if it's so bad that it's unmaintainable, people will start over from scratch with something else. Granted, this is a pretty low bar, but if you look around at things like the Linux kernel you can learn all sorts of things. Don't be afraid to go on sites like GitHub and just start clicking on things. Find your favorite piece of software and download the source code, see if you can get it to compile.
Sure, you may not write practical code right away. You may be (rightfully!) shouted down for the first crummy patch you submit to an open source project. Just take your licks, learn from the experience, and improve your code. These things don't come overnight. And don't be ashamed if you can't learn how to code. Some studies suggest that a large percentage of the population can't learn to program. At that point you might want to move on to something else, so as not to slow down others by creating code they will have to fix. It's also okay if you don't like programming; there are enough of us who do. You don't have to be an expert at everything. You can't be, either.
I guess ultimately my practical advice would be to pick up Python and go through one of the many tutorials or books online. Why Python? It runs everywhere and is fairly easy to start with, but has enough power and extensions to make it applicable to just about anything. Plus it's not PHP or VisualBasic.
One last thing: that whole thing about Python having tons of extensions was a subtle hint. Don't reinvent the wheel. Sure, it's good practice to write code, maybe perform some katas, but don't create a piece of software that already exists unless you can do it better. Keep an eye out for libraries, extensions, toolkits and frameworks that will make your programming sessions that much more powerful.
posted at: 03:22 | path: | permanent link to this entry
Stop overwriting my gorram user ~/.bashrc. KTHXBYE.
(bug report and patch later; I'm kind of busy right now)
posted at: 20:13 | path: | permanent link to this entry
I probably should have thought of this earlier, but there are more than a few blogs entitled "Nathan's Musings". At least mine shows up fairly close to the top, but that also means it's too late to change it now. Or is it?
I've always been amused by words that have fallen out of fashion; it's funny where they turn up, and they have a certain flavor, a certain je ne sais quoi that make them interesting to me. Perhaps it's just that they are not common, something new (to me at least). Words like ablution and accoutrements.
I think we lose something when we let certain words fall out of use. Much like some foreign words (like simpatico) which native speakers claim don't have an equivalent in English, forgetting words with different shades of meaning limits our expressiveness. Much as someone who curses when angry can't muster the concentration to form a proper epithet, we end up not saying exactly what we mean.
With all that said, I'm going to rename this blog to "Nathan's Lucubrations", mostly because it doesn't turn up a lot of search results in Google, but also because I like it. I can't guarantee all my entries from now on will be written at night, though ;)
posted at: 06:01 | path: | permanent link to this entry
Yet again, I'm pressed for time, so I'll toss another entry into my ongoing attempts to edify the Internet and lift worthy works of art out of the shadows. This time, it's "Vélos" by Boom Boom Beckett, yet another band whose music I came across whilst randomly sampling Jamendo. Unfortunately, they only have one album and a single up on Jamendo, but I have to say that both please me, albeit in different ways.
"Vélos" is a nice little laid back jazzy album, which, while not a masterpiece, definitely has some very nice tracks. The whole album is technically competent, but "M.eur Chagall", "Salsa di Soy" and "Oat Flakes" definitely make this album worth downloading for any jazz fan.
The single is a remix of "Salsa di Soy", and it takes it in a more techno-ish direction. Fairly amusing, especially if you've ever had a disagreement with a fellow performer over chords or notes.
posted at: 05:23 | path: | permanent link to this entry
Not a lot of time for an entry today, so I'm going to fall back on an idea I've had for a while. There's a lot of music out there, not all of it good. Used to be, the labels would find "good" music (for some definition of "good") and publish it. These days, the labels are (mostly) engaged in fucking people over through the legal system, especially since they've been made irrelevant by the Internet. The Internet, however, is not always that "good" in selecting "good" music (again, for some definition of "good"). What is an audiophile to do?
Well, I have some experience making (and listening to) music, so I figure I have as much of a right as anyone to post what I think is "good". Besides, it's my blog; if you don't like the music I like, go write your own blog. Hopefully you will find my selections at least interesting, and be thereby edified. If not, I'll give you a full refund.
Today's selection is Mud&Dust, a Trance/Psychedelic group with what I consider excellent programming music on an album also called Mud&Dust.
Oh, by the way, I will be posting only music that is available in the public domain or under a reasonable Creative Commons license, because a) you can try it for free, and b) we all really need to start boycotting big labels and their ridiculous copyrights. Big labels can suck it.
posted at: 00:11 | path: | permanent link to this entry
I just got back from the 2011 Linux Plumbers Conference. Definitely glad I went. It's been a while since I've done any serious hacking on the Linux kernel, so there was a lot of learning on my part. I'm glad to see that there is still a lot of momentum and excitement in the community. It's also nice to hear people admit some of Linux's shortcomings, and more importantly, discuss ways to remedy those shortcomings.
The energy was inspiring, and some of the conundrums brought up have piqued my interest. I thought I'd jot a few things down that I think might be worth revisiting, and hopefully I will get time to look into them in future posts:
I hope to delve deeper into these topics later; who knows, if I manage to scrounge up enough time, I might even be able to test out some solutions to them.
posted at: 21:53 | path: | permanent link to this entry
One of my favorite Debian packages is Bastille, a collection of scripts to harden computers. Sure, I could do all these things myself by hand, but it's nice to have an automated method that covers most things I would do anyway.
Bastille is fairly thorough, and happens to cover more than just Debian. Unfortunately, it seems to have become unmaintained, and while the version packaged for Debian still works fairly well, there have been some cracks. For one, even the experimental Debian package of Bastille doesn't support the (current) stable distribution of Debian. There is a quick fix for the "not a supported operating system" problem, but the previous packaging of Bastille has a statoverride umask problem where it sets the permissions of critical executables to 0000 (no read, write, execute or anything for anyone, not even root!).
I'm hoping that something better comes along, or someone starts maintaining it; it's tempting to do it myself, but to be honest, I have neither the time, inclination nor expertise to keep Bastille current for anything other than Debian.
posted at: 22:19 | path: | permanent link to this entry
In case you don't have it installed, I highly recommend the bash-completion package (Debian and derivatives should be able to "apt-get install bash-completion" and source the scripts via ". /etc/bash_completion" to get this working; pretty sure it comes standard and is sourced by default on fresh installs of squeeze). Among other things, it manages to autocomplete filenames. On other hosts. Over ssh. Without any (very) noticeable delay (or maybe the beer is just slowing me down that much; I'll have to check later to see what bash-completion does behind the scenes).
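If you want to see the ssh trick for yourself (host name made up; assumes you already have keys or a shared connection set up so it doesn't prompt for a password):

$ sudo apt-get install bash-completion
$ . /etc/bash_completion
$ scp myserver:/var/log/sys<TAB>
$ scp myserver:/var/log/syslog .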
I was going to post something more comprehensive on rapid prototyping and unit testing in C++, but that will have to wait until I get my RCS -> Git sh*t sorted out. In the meantime, have fun with bash. I know, I know, you zshers and kshers are probably silently chuckling at us knucklehead bash users, but hey, at least we're (finally) catching up, right?
More links to goodies to come soon! And maybe comments for the blog. Happy hacking!
posted at: 04:26 | path: | permanent link to this entry
So I'm typing along, when all of a sudden, my system comes to a crawl. Windows go blank, apps don't respond to key presses. Normally, I wouldn't be surprised. But wait a moment; this isn't Windows or OSX; it's Linux. WTF is going on? I check top:
top - 08:04:54 up 32 days, 21:15, 15 users, load average: 6.06, 3.74, 1.92
Tasks: 205 total, 1 running, 203 sleeping, 0 stopped, 1 zombie
Cpu(s): 1.6%us, 1.3%sy, 0.0%ni, 0.0%id, 97.1%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 4107088k total, 2512484k used, 1594604k free, 420900k buffers
Swap: 2654200k total, 450656k used, 2203544k free, 462820k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2569 root 20 0 140m 51m 12m S 3 1.3 322:06.24 Xorg
768 npsimons 20 0 190m 21m 10m S 1 0.5 5:36.34 gnome-panel
355 root 20 0 0 0 0 S 0 0.0 127:30.59 kcryptd
3454 npsimons 20 0 132m 8868 5948 S 0 0.2 4:51.28 metacity
4011 npsimons 20 0 134m 10m 6112 S 0 0.3 4:09.77 gnome-terminal
11960 root 30 10 2328 1180 488 D 0 0.0 0:02.63 sxid
The one thing that catches my eye is sxid, a security checking program I installed that automatically checks changes in status or permission on suid executables. But why is it running now, instead of late at night when I'm not around? Checking /etc/crontab, I find this:
25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
WTF?! Who thought it was a good idea to start cron jobs at 06:25? And the entries for weekly and monthly are similar. Okay, simple: change them to slightly after midnight; if I'm still on the computer by then, that will be my indication to go to bed.
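For reference, here's what my daily entry in /etc/crontab looks like now (weekly and monthly got the same treatment):

5 0 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )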
But then I have another thought: there's plenty that gets run from cron, and only sxid is causing problems; why? sxid is fairly new, but it should at least have a crontab that nice's it properly, right? No:
#!/bin/sh
SXID_OPTS=
if [ -x /usr/bin/sxid ]; then
/usr/bin/sxid ${SXID_OPTS}
fi
Programmers/packagers, here's a hint: even though your software may be "system" software that runs in the background, you still need to think of usability. Don't be like Symantec. This isn't Windows or iOS; our OS *can* walk and chew bubble gum at the same time. Don't abuse the privilege, and take a look at other cron scripts where use of nice(1) and ionice(1) are standard.
And yes, I should probably send a patch instead of just bitching. Just for reference, here's how I fixed it (which I partially cut and pasted from the debsums daily crontab):
#!/bin/sh
SXID_OPTS=
if [ -x /usr/bin/sxid ]; then
nice -n 19 ionice -c 3 /usr/bin/sxid ${SXID_OPTS}
fi
posted at: 08:50 | path: | permanent link to this entry
Went for a hike on Friday with other CLMRGers to Morris Peak. Not a really strenuous challenge, although it did leave my calves a bit sore. Nevertheless, it's always good to get out in the fresh air and sunlight with friends.
posted at: 00:21 | path: | permanent link to this entry
"And some things, which should have not been forgotten, were lost." - "Lord of the Rings"
They say that those who forget history are doomed to repeat it. I've always wondered when I hear this, what's the cycle rate? Is it constant, or does it vary for different fields?
Joking aside, sometimes things which are "forgotten lore" can explain why things are as they are today. This is also an important point as to why public domain and freely accessible records (assuming there are records) are important: without an accessible record of history, how can we learn from our mistakes? How can we understand progress (or, lacking progress, simply the state of the world)?
One of the things that piqued my curiosity was why Ravel had composed "Bolero" for not just B-flat tenor and B-flat soprano saxophones, but also the F-natural sopranino saxophone. Who had heard of such a thing? Everyone knows that saxophones come only in alternating B-flat and E-flat transposing models, except for that relic of history, the C melody. So, when it came time to play the F-natural sopranino part of "Bolero", I simply picked up my B-flat soprano and played its part again; after all, the passages were identical, right down to the phrasing and articulation (excepting that it was for an F sopranino).
A couple of years later, during a private lesson, my instructor points out "Universal method for the saxophone" by de Ville. Turns out, there were actually two series of saxophones: one pitched in F/C (the "orchestral" series) and the one we all know and love today in Eb/Bb (the "military band" series).
"Universal Method" is out of print, and if it hadn't of been for efforts like the one at The Internet Archive, knowledge like this might have been lost. And that's just stuff from the first ten pages! History, indeed, has much to teach those willing to listen.
posted at: 04:55 | path: | permanent link to this entry
Why have a blog? I've always thought that blogs are kind of egocentric, self-centered little pieces of fluff, not worth much. Blogs aren't really anything new; they're basically homepages, and I've had one of those long enough. So why have a blog?
I guess I'm trying to have a place to write down my thoughts. Of course, I could easily enough do that and keep them to myself. I guess I'm also trying to find a way to contribute, to give back to this wonderful thing called the Internet. Because for all its failings, I've found many, many good and worthwhile things there (and those are just two examples off the top of my head).
And yes, I'll admit, I'm trying to grab some attention, make my mark, or at least get you curious enough to look at my resume.
posted at: 02:08 | path: | permanent link to this entry
This is something I really keep meaning to tell people about: binfmtc. It may not seem like much, but being able to rapidly iterate a prototype, or just test something to learn how it works, is tremendously powerful. I think it was Brooks in "Mythical Man-Month" who said that interactive programming should not be overlooked as a very powerful tool.
The really nifty things, though, are the included example utilities, realcsh.c and realksh.c. No, those aren't replacements for the C and Korn shells. They are actual scripting shells for C and - wait for it - kernel mode scripting! That's right, with root privileges, you too can be mucking about in kernel space, right on your very own commandline! Dangerous but awesome!
Just to give you an idea, here's my Template.cc I've been using over the past months to work on exercises from Thinking in C++, Volume 2:
/*BINFMTCXX: */
// -*- Mode: C++ -*-
// Copyright (C) 2011 Nathan Paul Simons (C2T9uE-code@hardcorehackers.com)
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program; if not, write to the Free Software
// Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
//
// Alternatively, the GPL can be found at
// http://www.gnu.org/copyleft/gpl.html

// For copy().
#include <algorithm>
// For std::cout and std::endl.
#include <iostream>
// For std::ostream_iterator<>.
#include <iterator>
// For EXIT_SUCCESS.
#include <cstdlib>

int main(int argc, char* argv[])
{
  using namespace std;

  if(argc < 1)
    return EXIT_FAILURE;

  copy(argv, argv + argc, ostream_iterator<char*>(cout, "\n"));

  std::cout << "Hello, world!" << std::endl;

  return EXIT_SUCCESS;
} // int main(int argc, char* argv[])
Just 'M-x insert-buffer RET Template.cc' and you're good to start hacking! You can add compiler flags either to the line starting with /*BINFMTCXX: or put them in the environment variable BINFMTCXX_GXX_OPTS. I prefer the latter, as I use quite a few, which I initially picked by running 'g++ --help=warnings | awk '{print $1}' | sort | egrep "\-W"' on the commandline and weeding out the ones that didn't apply to C++ from there, then additionally eliminating flags that were more annoying than helpful (such as -Wabi, -Waggregate-return, -Winline, -Wpadded, and -Wunreachable-code). You can also link to external libraries this way, but it doesn't always work cleanly for linking to external object code (.o files); your safest bet is relying on libraries that only require header includes, or just including implementation source files (.cc files).
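Once the template is filled in, just make it executable and run it like any other script (output roughly what I get; argv[0] may vary with how binfmtc invokes things):

$ chmod +x Template.cc
$ ./Template.cc foo bar
./Template.cc
foo
bar
Hello, world!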
posted at: 00:12 | path: | permanent link to this entry
It's an old maxim that in writing, you should show, don't tell. Don't say a character is happy, have him literally jump for joy. To some extent, this precludes a third person point of view, obviating the need for an omniscient narrator. Sometimes that can be a good thing, to let your audience get a better feel for being in the situation you are describing, helping them suspend their disbelief.
This carries over into other fields. For instance, there are those who hold that any film that starts with textual exposition has points against it. Some even go so far as to say that telling or showing too much can ruin the horror of an unseen nemesis (cf. Alien vs. Aliens).
Even more interesting is where it carries over to other seemingly unrelated fields, say web design. Or programming. Far too often, I will see code like this:
// Count the number of doodads.
int ii;
for(ii = 0; doodads[ii] != NULL; ii++);
Is it concise? Sure. Will it work? Maybe, if doodads has a NULL pointer as its last element. But is that comment really necessary? I've always believed that code should tell the "how" and comments should tell the "why"; if your code doesn't tell the "how", then it needs to be rewritten to be more understandable. Two simple changes could make this code better:
size_t num_doodads;
for(num_doodads = 0; doodads[num_doodads] != NULL; num_doodads++);
Change the variable name and the comment becomes unnecessary. Better yet, if you are using an STL container, let it do the work for you:
const size_t num_doodads(doodads.size());
That's if you really need the number of doodads and are assuming it won't change; it's probably best just to use the member size() call everywhere you need it. Also, don't rule out iterators; they may seem like a silly concept at first, but the nice part is that they work with any container, so changing out containers becomes dead simple, with the proper forethought:
typedef vector<int> Container;
typedef vector<int>::iterator Iterator;

Container myCollection(10);
for(Iterator itr(myCollection.begin()); itr != myCollection.end(); itr++)
  cout << (*itr) << endl;
By changing two tokens (the two vector tokens in the typedefs, to say deque or list), you can get different performance characteristics and tune for your particular application. Of course, you have already profiled the code to make sure that the biggest slowdown is because of using vector, right? In the end, though, this really helps readability, because to other programmers, it is clear that you are using a Container and you are iterating through all its elements; the underlying implementation doesn't matter.
Quotes to live by:
Programs must be written for people to read, and only incidentally for machines to execute. -- Abelson & Sussman, SICP, preface to the first edition
Any fool can write a program that the computer can understand. It takes a good programmer to write a program that other people can understand. -- Martin Fowler
I can't tell what the hell his code does, it's mostly comments. -- Adam Radford
posted at: 00:13 | path: | permanent link to this entry
I finally got around to installing blog (I hate that word) software. For now, I'm still learning how to use it, but you can expect me to randomly post about computers (software, Linux and programming mostly), music (performance, jazz and big band mostly), and the outdoors (hiking, climbing, search and rescue).
A little about myself: I play saxophone (alto and soprano) and clarinet in the local big band (which I also administer the website and mailing list for) and saxophone (alto, tenor, baritone, and soprano) and percussion (cymbal pit) in the local community orchestra. I've dabbled a very wee bit in music theory and composition, mostly electronica, but would like to get more into that someday.
I'm also a member of the China Lake Mountain Rescue Group. I like to hike, rock climb and backpack, plus it's nice to feel I've helped people.
For my day job, I write software. These days it's mostly C++, preferably on Debian GNU/Linux. I administer my own email and web server (which you're reading this from) and home network, also using Debian.
posted at: 07:00 | path: | permanent link to this entry