Upgrading Gitea Is Painful

I just wanted to upgrade a Gitea instance and, in the process, deleted the old gitea binaries. After that, pushing to a repository no longer worked: the path to the gitea binary, which is used e.g. when the instance or a repository is created, is hard-coded into several files, and the binary those files pointed to was gone. So, in order to upgrade to a newer version of gitea, you have to do the following, assuming you run the service under the gitea user:

  1. In ~gitea/.ssh/authorized_keys, you have to adjust the path of gitea to the location of the new binary.
  2. Per repo, you need to adjust the path to the gitea binary in the following files:

    • hooks/post-receive.d/gitea
    • hooks/pre-receive.d/gitea
    • hooks/update.d/gitea

More files may need to be changed, but at the moment this seems to be enough to make things work again.
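As a sketch of step 2, here is a small helper that rewrites the hard-coded path in one repository's hook scripts. The function name and the old/new paths are hypothetical and need to be adapted to your installation:

```python
import os

def fix_hook_paths(repo_dir, old_path, new_path):
    """Rewrite the hard-coded gitea binary path in one repository's
    hook scripts and return the list of files that were changed."""
    hooks = ("hooks/post-receive.d/gitea",
             "hooks/pre-receive.d/gitea",
             "hooks/update.d/gitea")
    changed = []
    for hook in hooks:
        path = os.path.join(repo_dir, hook)
        if not os.path.exists(path):
            continue
        with open(path) as f:
            text = f.read()
        if old_path in text:
            # Rewrite the script in place with the new binary location.
            with open(path, "w") as f:
                f.write(text.replace(old_path, new_path))
            changed.append(path)
    return changed
```

The same replace-and-rewrite approach also covers ~gitea/.ssh/authorized_keys from step 1; loop the function over all directories under the gitea user's repository root to fix every repo.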

Links:

  • https://gitea.io

Why Do DigitalOcean, AWS & Co Not Default To Debian?

I just read Chris Lamb's platform[1] for this year's DPL elections, where he asks why enterprises do not use Debian by default. In this article, I want to give some answers, although I think Chris is very likely already aware of them, given his track record.

  • Marketing

    I think Chris is partially right: Marketing is important, whether we like it or not. The ArchWiki example that he mentions shows that they manage to present relevant content in a very accessible manner. This has partly to do with how they organize the information, and also with them possibly keeping it more up to date than we probably do (I frequently find better information in the ArchWiki myself.)

    Their styling is imho on par with ours, so the difference must lie elsewhere. It may partially be due to them using different software with a much bigger user base than ours, which certainly contributes to users finding it easier to work with: they don't need to learn anything new - no new procedures, no new markup language - the software already feels familiar. In short, it conforms more to existing user habits because of the market share of that other software.

  • Commercial Viability

    In my professional experience, I found that there are a few factors which make other Linux distributions, particularly CentOS and friends, more attractive to enterprises like e.g. AWS:

    • Our support cycle is too short.

      These enterprises like to have those 10 years of support and never worry about any upgrades, because after 10 years you can usually safely throw the machine away. The impact is that the vendor, e.g. Amazon, does not need to involve the customer in upgrading their application, which the customer usually does not want to do and does not allocate any budget to, either. The typical customer expects that, once his application is deployed, it will continue to run unchanged until he decides to stop running that version of said software, and considers upgrades a waste of time and money. Also, both security updates and newer versions of some third-party software become available on older versions of such Linux systems without the need for a big upgrade. The former enables the vendor to claim that his platform is secure and that any breaches are solely the customer's fault, while the latter enables the vendor to offer new features without requiring the customer to upgrade. As an example, I'd like to point to the availability of PHP7 on CentOS 6.8, which is from 2016 but does not deviate much from even older versions of CentOS - and thus requires little re-learning - with the first 6.x version having been released in 2011, alongside Squeeze.

      [2018-01] It looks like Snaps are addressing this problem.

    • As a corollary to that, there is a much clearer separation between the very small core distribution and the large amount of third-party commercial software.

      Also, the fact that we already include tons of software, which eats a lot of manpower, is underemphasized, so it may not be obvious how Debian can make users' lives easier.

    • There is a certification system in place that gives the enterprise some confidence about the abilities of any prospective hires. I am not aware of any certification system for Debian.

    • The boon and the bane of Debian is its non-commercial nature. There is no single commercial entity behind Debian, which results in enterprises not knowing whom to sue, or how long the project will survive. Never mind that similar problems have occurred with many vendors in the past - at least there is a vendor which could be sued, if need be. And they seem to have enough government backing not to go bankrupt easily, either. But the distrust of volunteer organisations as loosely knit as Debian runs deep.

Links:

  • https://www.debian.org/vote/2017/platforms/lamby

Free Software and the Military

I often and gladly read Fefe's blog, because it provides a wealth of news in aggregated form, with links to the sources, without forcing tons of advertising and worse on the reader. But one thing has bothered me for a long time: at every opportunity, Fefe demands that the GPL be extended by a clause excluding military use.

I think nothing of that, and in my opinion it must be firmly opposed.

My reasoning:

For one thing, it would further fragment the software landscape in licensing terms, in a way that would throw us back to the time before the GPL was developed. If the GPL were extended by this demand, the next developer would come along and want to restrict use in the field of genetic engineering, by the church, by car drivers, vegans, people of colour, or whatever, and hardly any software would still be compatible with any other software. This kind of licensing tangle was common before the GPL.

We already have difficulties with OpenSSL, jQuery and surely a number of other software packages that raise licensing questions or require special treatment.

There are, of course, massive demarcation problems: Does a milling machine in a munitions factory now need software not under Fefe's GPL, or would such use still be covered by such a modified "GPL"? What about sewing machines for body armour? For uniforms? What if the Bundeswehr wants to use the software for civil-protection purposes during the next flood disaster, or if resistance fighters in North Korea (do they even exist?) want to use it against their government? What if those resistance fighters happen to be fighting a comparatively more liberal regime, as currently in the Middle East, or earlier in Latin America? I use the word "resistance fighters" in both cases to take the political judgement out of the question and to focus the discussion on the legal mechanics as they appear to me as a legal layman.

On top of that, this general change is unnecessary, because even today anyone can license their software as "GPL plus the following restrictions/extensions". Popular examples are the "OpenSSL exception" and the "Classpath exception" (see Wikipedia on the topic).

Furthermore, he assumes that the military would have to abide by such a licence. All experience with state behaviour speaks against this, especially when the topic of "national security" is somehow involved. In my opinion, one must assume that anything these people find sufficiently useful will, in case of doubt, simply be requisitioned, and that no judge will stand in the way.

And last but not least, one should not lose sight of the aspect of self-defence: not only Fefe can define "military use" - the state can do so too, as we have already seen in the disputes over encryption, and especially over PGP/GnuPG. Such a Fefe licence would therefore have to contain clauses putting a stop to such attempts.

From my point of view, it is clear that the state and companies in its orbit can practically issue the necessary permission to themselves, while non-state actors would probably not be able to find a legal substitute, say in the form of QNX. One should keep in mind that this constellation - citizens believing they can only defend themselves against authoritarian governments or other attackers by force of arms - has existed for a long time and in many parts of the world, currently most visibly in the Middle East.

Moreover, one would probably only be able to sue licence violators in extreme exceptional cases, assuming one even became aware of the violation, because in case of doubt these people (or circles) simply have more legal and physical firepower than the well-meaning software author.

In my opinion, Fefe should approach this topic, as with other topics, with more reason and less gut feeling. Then he would either have to drop his demand, or at least explain why only military uses should be excluded - other applications kill people just as well, only not necessarily as obviously and spectacularly. And as a political person he would, in my opinion, have to explain why these changes to the licensing landscape would benefit society.


BT hijacks DNS queries

I just configured a new DNS name in one of my domains that did not exist before; the associated IP number is routed to Germany. While the name was not yet live, the answer should have been NXDOMAIN, meaning that the name does not exist. Example:

$ dig blablablablabla.oeko.net

; <<>> DiG 9.9.5-8-Debian <<>> blablablablabla.oeko.net
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 38513
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;blablablablabla.oeko.net.      IN      A

;; AUTHORITY SECTION:
oeko.net.               139     IN      SOA     a.ns.oeko.net. hostmaster.oeko.net. 1021018254 16384 2048 1048576 2560

;; Query time: 10 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Thu Feb 12 21:33:53 CET 2015
;; MSG SIZE  rcvd: 105

But instead, they gave a fake answer:

$ dig bla.oeko.net

; <<>> DiG 9.9.5-8-Debian <<>> bla.oeko.net
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 9013
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;bla.oeko.net.          IN  A

;; ANSWER SECTION:
bla.oeko.net.       20  IN  A   92.242.132.15

;; Query time: 32 msec
;; SERVER: 192.168.1.254#53(192.168.1.254)
;; WHEN: Thu Feb 12 19:55:14 GMT 2015
;; MSG SIZE  rcvd: 46
$

As a result, I was unable to check whether my DNS was performing correctly until they decided to throw the fake answer away.

Of course, this has huge potential for censorship of all kinds, which I have seen in action elsewhere already. I am not the only person aggravated by this kind of behaviour. Please follow the link below to read other people's take on this problem.
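If you want to check a resolver from a script, the following sketch (the function name is my own) distinguishes the honest NXDOMAIN case from a fabricated answer. The `.invalid` TLD is reserved by RFC 2606, so an honest resolver must report any name under it as nonexistent:

```python
import socket

def resolves(name):
    """Return an A record for *name*, or None if the lookup fails.

    An honest resolver answers NXDOMAIN for a name that does not
    exist (the lookup fails), while a hijacking resolver hands back
    a fake address instead."""
    try:
        return socket.gethostbyname(name)
    except socket.gaierror:
        return None
```

With a well-behaved resolver, `resolves("anything.invalid")` comes back empty; if it returns an address, someone between you and the root servers is fabricating answers.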

Thank you!

Links:

  • http://linuxforums.org.uk/index.php?topic=11464.0

Typing Chinese on a Computer

Just today, I read an article about the influence of the computer on the Chinese language. I can agree with some of the author's points, but think that the difficulty of using a method like Wubi is generally overstated. CangJie is more difficult, but in contrast to the spoken language, both have the very valuable property of not changing with dialect, region or time. The speedups a user of predictive input gains are also available to users of handwriting or structure-based input methods, and the input speed should be excellent at the 150 characters per minute achievable in Wubi, or the 200 achievable in CangJie. On top of predictive input and much less of the guesswork that makes the phonetic input methods slow, the structure-based input methods sport phrase books and rules providing shortcuts to type several characters in one go. And while I have seen every undergrad student using only PinYin or ZhuYin, every PhD student I have met so far has switched to Wubi, simply for the massive speed increase.

However, I am unconvinced about the notion that writing Chinese is slower than English:

If you can type 150 Chinese characters per minute, that amounts to roughly 50 words per minute if you subtract particles and compounds, as many Chinese words have only one or two characters. Now imagine how fast you would have to type to achieve a similar speed in English: if one Chinese character carries roughly as much as an average English word of four letters - which is probably not enough - those 150 characters correspond to 600 letters per minute, and then there is spacing, too, which does not exist in Chinese. I also hold that the structure-based input methods at least help you memorize the graphic elements of the characters, thus being closer to handwriting than the phonetic input methods. With the composition rules and phrase books, you usually end up needing one to three keystrokes to produce a Chinese character. In summary, I think it is not easy to say whether English or Chinese can be typed faster.
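The back-of-the-envelope figures above, using the article's own assumptions, can be restated as:

```python
# The article's assumptions: 150 Chinese characters per minute, with one
# character treated as roughly equivalent to a four-letter English word.
chinese_chars_per_minute = 150
letters_per_english_word = 4

equivalent_letters_per_minute = chinese_chars_per_minute * letters_per_english_word
print(equivalent_letters_per_minute)  # 600, the figure given in the text
```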

Unfortunately, my own experience with Chinese input is limited to PinYin and Wubi. As far as the steep learning curve goes, the principles of Wubi can be explained in perhaps one to three hours, and after that it takes two weeks of practice to achieve some fluency. Not a big investment compared to learning Chinese in the first place, or to the waste accrued over time using an inferior method. I guess it is mostly a psychological barrier, possibly combined with unsophisticated didactics, that contributes to the perception that these methods are hard.

Small Timezone Code Snippet

Today, I was looking at how to adjust a timestamp from a log file without time zone information so that it contains the local time zone, letting me put a time-zone-aware value into a database. It turns out that this is a somewhat under-polished part of the Python standard library, at least as of Python 2.6, which I am using (don't ask why). While looking for a solution, I frequently came across code that used pytz, but I wanted something that would stay within the standard library.

So here's my hodgepodge solution to the problem, which should work in most of Europe:

import time

def getTimeOffset():
    # time.timezone and time.altzone hold seconds *west* of UTC, so the
    # sign has to be flipped for the usual +HHMM notation; time.altzone
    # already includes the DST shift.
    if time.localtime().tm_isdst:
        offset = -time.altzone
    else:
        offset = -time.timezone
    sign = "+" if offset >= 0 else "-"
    hours, minutes = divmod(abs(offset) // 60, 60)
    return "%s%02d%02d" % (sign, hours, minutes)

This approach is a straightforward extension of the idea presented here.
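As an aside, on Python 3 (3.6 or later) the standard library can produce the same string without any manual arithmetic, which may be preferable if you are not stuck on 2.6:

```python
from datetime import datetime

# astimezone() with no argument attaches the local time zone,
# and "%z" renders it in the usual +HHMM notation.
local_offset = datetime.now().astimezone().strftime("%z")
print(local_offset)
```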

New Blog Software, Links Changed

As you might have noticed, I have switched from MovableType to Pelican. As a consequence, the links in my blog have changed - usually only a little, but in a slightly irregular fashion. Please peruse the archives and search for the title of the article you are looking for. The content itself should all be there.

Thank you!

DNS: Open Resolvers, Revisited

Long is the list of ISPs and carriers that force broken DNS servers on their customers, thereby manipulating their customers' traffic or outright censoring what their customers can see. To combat such manipulations, and also to make it harder to observe customers' behaviour, it has been a pet project for some - me too, at some point - to run an open resolver, which allows random people on the Internet to query your DNS server for an arbitrary name. Unfortunately, the evil guys developed an attack [0] that makes it impractical to run an open resolver. So, while politically desirable, running an open resolver is unfeasible, and network operators around the globe strive to shut them down.

Now, these attacks all rely on the simple fact that, with UDP, you have no assurance that the source address in a packet actually belongs to the sending host. In my opinion, if you are willing to make the effort, there is one obvious way to provide an open resolver without this flaw: for hosts not on your own network, provide DNS over TCP only.
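For illustration, here is what a query looks like when framed for TCP: RFC 1035 (section 4.2.2) prescribes a two-byte length prefix in front of the normal DNS message, and it is the TCP handshake itself that makes source-address spoofing impractical. This is just a sketch of the wire format, not a full client:

```python
import struct

def build_tcp_dns_query(name, qtype=1, qid=0x1234):
    """Build a DNS query for an A record (qtype=1), framed for TCP:
    a two-byte length prefix followed by the ordinary DNS message."""
    # Header: id, flags (RD set), 1 question, no other sections.
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte.
    qname = b"".join(
        struct.pack("B", len(label)) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    message = header + qname + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
    return struct.pack(">H", len(message)) + message
```

To actually send it, open a TCP connection to port 53, write these bytes, then read a two-byte length followed by that many bytes of answer.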

I hope that someone will hack this feature into unbound [1], so people can easily deploy open resolvers in a reasonably safe way without disrupting the Internet. Currently, unbound's do-udp setting applies to incoming and outgoing queries alike, so switching it off would also put excessive load on upstream name servers.

Thank you for reading!

[0] See eg. http://openresolverproject.org/
[1] https://www.unbound.net

Fixing the Android Update Problem - A Few Thoughts

Time and again, Android has been taking heat for leaving its users in the lurch in the face of security problems while fixing such problems only in the most recent version. But in my opinion, not only Google but also the manufacturers are to blame for this situation: they are the ones who aim to lock down the devices with their Frankensteined versions of Android because they think it is their selling point, or at least their way to more revenue.

The following suggestion relies purely on speculation, because I am not privy to any contracts, product design or marketing discussions on behalf of any party. But from all I know, the following approach could alleviate the problem from the user's perspective:

Google should imho

  1. fix such bugs in as many versions of Android as required to achieve 75% market coverage, and
  2. adjust their contracts so that manufacturers who want early access and support from Google, as opposed to simply warping AOSP, are required to offer these updates within two weeks for all handsets that originally shipped with, or are currently running, any of the fixed versions of Android - lest they lose some kind of access to the program and the right to use the Android logo. Compliance should be checked frequently enough not to water down these requirements.

This would have the following nice side effects:

  • Google gets rid of the blame for not supporting their users (see point 1).
  • The manufacturers can still avoid the huge and profit-eating work of supplying users with new versions of Android, but are pressured to at least not leave their users alone (see point 2).

By going this route, the manufacturers are not required to give up part of their business relationship with Google - which would be hard to argue for, despite them doing the same to the carriers all the time (let's save that battle for later) - while making sure that the users are safe, sort of (relegating the general security debate about Android to a different discussion, too), without making it impossible to market new devices with new versions of Android.

The current situation, which I'd liken to driving a car with broken brakes, would imho warrant compulsory recall actions on the part of the manufacturers, which they would otherwise be legally obliged to perform - at least as far as my understanding of German consumer protection law goes. It would be somewhat interesting to see such a case heard before a German court, and I am far from confident that the Android brand will not be hurt while the problem festers.

I have the nagging feeling that I cannot be the first to have had these ideas, but wanted to state them nonetheless.

Firefox 30: Flash Always Plays After Upgrade

Recently, I upgraded to Firefox 30 in order to profit from the security fixes. I was delighted with its much improved speed as well, but thoroughly aggravated by a number of very nasty bugs. Contrary to my previous experience, Firefox insisted on playing Flash videos instantly, despite my having click-to-play etc. already enabled.

Anyway, to fix it, go to about:config and change the setting of

plugin.state.flash

from the default value of '2' to '1'. Save, and you're mostly set.

Unfortunately, I have found a number of situations where the browser insists on playing a video regardless, which I have not yet been able to configure away, although I have configured all the obvious settings not to auto-play.