How to manually update a deb package from source

Probably everyone has encountered a package in Ubuntu which was not the newest released version, even though for some reason the newest one was needed. The first step is to search for a PPA with the desired version. But what if there is no such PPA, or you want to build the version yourself? This is where this guide comes in. Note however that it is not aimed at ordinary users – you need some experience with programming/compiling to successfully build a package.

Before you start

Before you start, make sure that you have source packages enabled in your software sources.
Next you obviously need the upstream source tarball of the new program, which should look something like <packagename>-<newversion>.tar.gz.
Download this tarball to a new directory <somedir> and extract it there.
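
If you are unsure how to get the tarball onto your machine, the download step could look like this (a sketch – the URL and names are placeholders, use the real ones from the upstream project):

# hypothetical example – replace the URL and names with the real ones
mkdir <somedir> && cd <somedir>
wget https://example.org/releases/<packagename>-<newversion>.tar.gz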

Updating Package info

For the following commands I assume you are in the previously created directory <somedir>.

First we need to get the old version of the source package

apt-get source <packagename>

This will download and extract the old source package into <packagename>-<oldversion>.

Now we need some helper scripts to perform the upgrading as well as the build-time dependencies of the package

sudo apt-get install dpkg-dev devscripts fakeroot
sudo apt-get build-dep <packagename>

Next change into the extracted sources of the old package and update the packaging

cd <packagename>-<oldversion>
uupdate -v <newversion> ../<packagename>-<newversion>.tar.gz

# change into the extracted new package
cd ../<packagename>-<newversion>

# update version info
dch -l ~ppa -D $(lsb_release -sc)

For more information see the Debian New Maintainers Guide.

Building the program

To trigger a rebuild of the program simply execute

dpkg-buildpackage
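
If you only need the binary packages for a quick local test, a common variant is to skip signing and build binaries only (not required for the PPA workflow below):

# build binary packages only and skip signing – handy for a quick local test
dpkg-buildpackage -us -uc -b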

Uploading your version to a PPA

To upload a package to a PPA you first need to sign it to prove that the upload comes from you. To do this, execute the following in the <packagename>-<newversion> directory

debuild -S
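
If this is the first time this upstream version is uploaded to your PPA, you may also need to include the original tarball in the upload:

# -sa forces the original tarball to be included in the upload
debuild -S -sa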

Furthermore you need the upload tool dput to actually perform the uploading

sudo apt-get install dput

Now change to <somedir> and execute

dput ppa:<your_username>/<repository> <source.changes>
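
With hypothetical values filled in (user name, PPA name, package and version are made up for illustration), the call could look like this:

# hypothetical example
dput ppa:jdoe/backports hello_2.10-1~ppa1_source.changes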

You can find more information at Launchpad.

Secure Own-/ Nextcloud setup

update 24.04.2017 –  include Subject Alternative Name field
update 20.12.2017 – discuss Certbot as an alternative

While the Nextcloud Manual suggests enabling SSL, it unfortunately does not go into detail on how to get a secure setup. The core problem is that the default SSL settings of Apache are not sane, in the sense that they do not enforce strong encryption. Furthermore the default certificate will not match your server name and will produce errors in the browser.

In the following, a short guide on how to manually set up a secure Apache 2.4 server for Nextcloud is presented.

Note: nowadays one can also use Certbot to automatically perform the steps below and get a certificate that browsers accept. However, due to its certificate transparency policy, your host name will be submitted to a public list. This may or may not be what you want.

How to root Android using Ubuntu

update 27.10.2018 – use TWRP instead of CWM (discontinued)
update 14.10.2017 – new instructions to set-up udev rules
update 26.02.2016 – instructions for Android 6 Marshmallow

The Big Picture

Android consists of three parts relevant to rooting

  1. the bootloader
  2. recovery system
  3. main system

Typically only the main system is running – that is the Linux kernel, the launcher, the phone app and so on. When we talk about rooting, we mean adding an additional app to the main system which has access to secured parts of the system and acts as a gatekeeper for other apps that also want to get access.

The problem is that the secured parts of the system are locked down – otherwise they would not be secure. This means that we cannot simply install such an app (e.g. an apk) from within the main system.

Therefore we have to go one level down. This is where the recovery system sits. Typically you do not see it, as it is only active when the main system cannot run – either because a system update is being installed or because you do a factory reset.
As the recovery system can perform a full system update, it also has access to the secured parts of the main system – exactly what we need.
The stock recovery system obviously does not allow altering the main system – otherwise everybody could get at your private data if you lose your phone.
So we need to replace it as well. But before that we have to talk about the bootloader.

The bootloader is a tiny piece of software which decides whether to start the recovery or the main system (or another main system, like Ubuntu Phone).
In the default configuration it only starts systems that it knows and trusts. In this configuration the bootloader is called locked.
Although this prevents malicious software from changing the phone and spying on us, it also prevents us from replacing the recovery system. By the way, this concept is also coming to the PC, where it is called UEFI Secure Boot.

Here is a graphical overview of the Android components:

[image: android-brs – the bootloader, recovery system and main system]

So what we need to do in order to get root access is

  1. unlock the bootloader
  2. replace the recovery system
  3. install a superuser app

Note that unlocking the bootloader also allows attackers to circumvent any of the Android security features (PIN etc.): it becomes possible to access all the files on the device using a different recovery system (unless userdata is encrypted).
Therefore Android will wipe all userdata when the bootloader state is changed from locked to unlocked.

So if you lose your unlocked device or it gets stolen, you better hope the thief is not tech savvy.

Preparations

First you need to install the fastboot binary to be able to perform low-level communication with the device

apt-get install android-tools-fastboot android-tools-adb android-sdk-platform-tools-common

The android-sdk-platform-tools-common package most importantly contains a whitelist (/lib/udev/rules.d/51-android.rules) with devices to which users can send commands over USB, so you do not have to run fastboot as root.
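
If your device is not covered by that whitelist, you can add a rule for it yourself. A minimal sketch (the vendor ID 18d1 belongs to Google – look up the one for your device in the lsusb output; the file name is arbitrary):

# hypothetical custom rule – adjust idVendor to your device
echo 'SUBSYSTEM=="usb", ATTR{idVendor}=="18d1", MODE="0664", GROUP="plugdev"' | sudo tee /etc/udev/rules.d/51-android-custom.rules
sudo udevadm control --reload-rules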

Now you have to reboot into fastboot mode. Usually there is a key combination you have to press on startup.

Remember this key combination, as you will need it a few more times.
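
If the main system is still running and USB debugging is enabled, you can also reach fastboot mode from your PC instead of using the key combination:

# reboot a running device into fastboot mode via adb
adb reboot bootloader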

Samsung devices however, like the Galaxy S3, do not support fastboot mode – instead they have a download mode, which uses a proprietary Samsung protocol. To flash those you have to use the Heimdall tool. While this article does not cover the heimdall CLI calls, the general discussion still applies.

Unlocking the Bootloader

last warning: this will wipe all user data on the device

For Google devices, like a Nexus 4 or Nexus 7, it is as simple as

fastboot oem unlock

If you have a Sony Xperia device, like an Xperia Z, you additionally have to request an unlock key and then do

fastboot oem unlock 0x<KEY>

where <KEY> is the key you obtained.

Using AutoRoot to install SuperSU

There are several superuser apps to choose from for Android 4 and below. However the only superuser app working on Android 5/ Lollipop and above is SuperSU by Chainfire.

As there are devices like the Nexus 5X shipping with Android 6/ Marshmallow, I will describe this method first.

Chainfire created an “installer” called AutoRoot that includes the fastboot utility and will perform the unlocking step described above. However if you have read this far, you probably also want to understand the rest of the process.

First you have to download the appropriate package for your device. In it you will find a recovery image which we have to start with

fastboot boot image/CF-Auto-Root-hammerhead-hammerhead-nexus5.img

The command above will not flash anything on your device; it just uploads the image and immediately starts it. The image contains a script that modifies the main system (changing the startup to get around SELinux) and installs the superuser app.

If everything goes well, you can now just reboot your phone and you are done.
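
To verify that rooting worked, you can ask for a root shell over adb (a quick sanity check – SuperSU will prompt you on the device to grant access):

# should report uid=0(root) once SuperSU has granted the request
adb shell su -c id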

You could lock your bootloader again now to make your device more secure. However the next Android update will remove root again, and repeating the rooting procedure will wipe userdata – so you have to balance security updates against the risk of your device being stolen. For the latter case you still have the option to enable encryption of userdata though.

Installing OTA updates

Android over-the-air (OTA) updates contain only the changes to the current system. To verify that the update succeeded, Android computes a checksum of the patched system and reverts to the old state if it does not match.

As SuperSU has changed the boot image to start itself, such updates will obviously fail. So to install an OTA update you will have to grab a factory image and restore the boot partition using the included boot.img

fastboot flash boot boot.img

After this you will have to patch the boot partition again using the procedure described above.

Also note that if you use apps that change the system partition (like AdAway that changes the hosts file), you will have to revert those changes as well in order for the OTA update to succeed.

Optional: Replacing the Recovery System

If you want some advanced features, like backing up all your installed apks, you can permanently replace the recovery image on your device. However this will most likely prevent you from installing OTA updates.
There are two prominent alternative recovery systems with the ability to install apps: ClockworkMod (CWM) and TWRP.

ClockworkMod has been discontinued, so we will use TWRP. From the website linked above, download the recovery image which fits your phone.

fastboot flash recovery <RECOVERY>.img

where <RECOVERY> is the name of the file you downloaded. For instance for a Nexus 9 and TWRP 3.2.3 it would be

fastboot flash recovery twrp-3.2.3-0-flounder.img

Restoring the stock recovery

If you have a Google device, you can grab the factory images here. There you will find an image of the stock recovery. You can restore it with

fastboot flash recovery recovery.img

Alternative superuser apps

If you run a device with Android older than 5/ Lollipop you have some alternatives to SuperSU:

I would recommend getting Superuser by CWM, as it is open source and also nag-free, since there is no “pro” version of it. There is even a pull request which might make it work with Android 5 in the future.

To install the app we need to get this zip archive and copy it to the device. Then we need to reboot into fastboot mode and select “Recovery Mode” to get to the recovery system. Once in recovery mode select

install zip -> choose zip from /sdcard

then browse and select the “superuser.zip” you just copied.

Once installed select

Go Back -> reboot system now

Once the system has started you should have a “Superuser” App on your device. Congratulations, you are done.

Debugging native code with ndk-gdb using standalone CMake toolchain

I recently ran into this problem and could not find any good solution on the Internet. So next comes a small summary of the problem with hopefully enough buzzwords, so Google can lead you here.

If you want to do C++ development on Android, you need the NDK for cross compilation. It comes by default with its own build system called ndk-build, which basically is a bunch of custom makefiles. But if you are sharing code between the Android platform and, let's say, plain Linux, you likely already have a build system in place. For C/C++, CMake is quite popular as it supports different platforms and compilers. Fortunately there is already a project which adds Android support to CMake. I will not cover that – instead I assume you are using it already.
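
For reference, configuring a build with such a toolchain file typically looks roughly like this (a sketch – the exact variable names depend on the toolchain project you use):

# hypothetical configure step using the android-cmake toolchain file
cmake -DCMAKE_TOOLCHAIN_FILE=/path/to/android.toolchain.cmake \
      -DANDROID_ABI=armeabi-v7a -DANDROID_NATIVE_API_LEVEL=9 ..
make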

Unfortunately you can't use the ndk-gdb script supplied with the NDK to debug your application, as it relies on the behaviour of ndk-build. But as said earlier, ndk-build is no wizardry, just a bunch of scripts. So it is possible to emulate its behaviour using CMake, as follows:

Add the following macro to your CMakeLists.txt file

macro(ndk_gdb_debuggable TARGET_NAME)
    get_property(TARGET_LOCATION TARGET ${TARGET_NAME} PROPERTY LOCATION)
    
    # create custom target that depends on the real target so it gets executed afterwards
    add_custom_target(NDK_GDB ALL) 
    add_dependencies(NDK_GDB ${TARGET_NAME})
    
    set(GDB_SOLIB_PATH ${PROJECT_SOURCE_DIR}/obj/local/${ANDROID_NDK_ABI_NAME}/)
    
    # 1. generate essential Android Makefiles
    file(WRITE ${PROJECT_SOURCE_DIR}/jni/Android.mk "APP_ABI := ${ANDROID_NDK_ABI_NAME}\n")
    file(WRITE ${PROJECT_SOURCE_DIR}/jni/Application.mk "APP_ABI := ${ANDROID_NDK_ABI_NAME}\n")

    # 2. generate gdb.setup
    get_directory_property(PROJECT_INCLUDES DIRECTORY ${PROJECT_SOURCE_DIR} INCLUDE_DIRECTORIES)
    string(REGEX REPLACE ";" " " PROJECT_INCLUDES "${PROJECT_INCLUDES}")
    file(WRITE ${PROJECT_SOURCE_DIR}/libs/${ANDROID_NDK_ABI_NAME}/gdb.setup "set solib-search-path ${GDB_SOLIB_PATH}\n")
    file(APPEND ${PROJECT_SOURCE_DIR}/libs/${ANDROID_NDK_ABI_NAME}/gdb.setup "directory ${PROJECT_INCLUDES}\n")

    # 3. copy gdbserver executable
    file(COPY ${ANDROID_NDK}/prebuilt/android-arm/gdbserver/gdbserver DESTINATION ${PROJECT_SOURCE_DIR}/libs/${ANDROID_NDK_ABI_NAME}/)

    # 4. copy lib to obj
    add_custom_command(TARGET NDK_GDB POST_BUILD COMMAND mkdir -p ${GDB_SOLIB_PATH})
    add_custom_command(TARGET NDK_GDB POST_BUILD COMMAND cp ${TARGET_LOCATION} ${GDB_SOLIB_PATH})

    # 5. strip symbols
    add_custom_command(TARGET NDK_GDB POST_BUILD COMMAND ${CMAKE_STRIP} ${TARGET_LOCATION})
endmacro()

Then use it like

add_library(YourTarget ...)
ndk_gdb_debuggable(YourTarget)

You should now be able to use ndk-gdb with CMake, just as if you had used ndk-build.
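
After installing a debuggable build of your APK on the device, a debugging session could then be started like this (a sketch – it assumes ndk-gdb from the NDK is on your PATH and is run from the project root containing the jni/ and libs/ directories generated above):

# launch the app and attach gdb to it
ndk-gdb --start --verbose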

Note that steps 4 and 5 are optional for debugging. They just reduce the size of the library that has to be transferred to the device. If you don't care, you can just leave them out. But then the solib search path from step 2 must be set to:

file(WRITE ${PROJECT_SOURCE_DIR}/libs/${ANDROID_NDK_ABI_NAME}/gdb.setup "set solib-search-path ./libs/${ANDROID_NDK_ABI_NAME}\n")

Ideally someone should integrate that in the Android toolchain linked above.

Update: merged upstream.

GNOME Project suffering the NIH disease

When I first read about GNOME dropping support for BSD and Solaris, my impression was that this is a good idea, aiming to unify limited resources and get the work done. I was also excited about the idea of the GNOME OS. I think it is necessary to keep the big picture in mind when developing the different components. Previously Ubuntu was the only project that did this, and it was also the reason why I started using Ubuntu: it made the different parts of Linux work together to achieve the big goal of a great overall system.

But then things started to go wrong. Instead of picking existing components and giving them the final polish, like Ubuntu did before, the GNOME project started developing things from scratch without any apparent reason to do so. And even worse: incompatible with existing solutions. It started with the rejection of the appindicator specification implemented by Ubuntu and KDE. At that point it was not clear to me whether the specification was broken or whether the responsible people at GNOME were just ignorant.

Then came systemd. And it became apparent that unfortunately it was the latter. To my knowledge Ubuntu is the biggest deployment of GNOME, and it is based around the Linux ecosystem. So dropping support for Ubuntu has nothing to do with unifying limited resources. Ubuntu is your target audience; if you should collaborate with any project, it is Ubuntu. My opinion is that some Fedora developers were pissed that the Unity interface was exclusive to Ubuntu, and instead of packaging it for Fedora they started making GNOME Shell exclusive to Fedora.

Next I read about the overlay scrollbars being re-developed for GNOME. While the first reaction might be that the developers simply do not want to use Ubuntu technology, I think the reason is different. The developer does not seem to have any antipathy towards Ubuntu, and if we look at the project he developed the scrollbars for, another explanation becomes visible.

But first let's take a step back and look at the core of GNOME. By this I mean the programming language it is written in: C/GObject – plain C extended with naming conventions and libraries to allow modern paradigms such as object-oriented programming and events/the observer pattern. From today's perspective one might wonder why one should choose this over C++, which integrates most of these features at the language level. But back when the GNOME project started, C++ was not mature yet, which meant that your program might break with the next compiler update or even the next STL update.

Therefore basing your project on plain C was a good idea. But a few years back it became obvious that programming in C/GObject seriously lagged behind more modern programming languages like C++, Java and C# for application development.

Unfortunately, instead of taking the straightforward route from C to C++ – which most C developers took when C++ matured, about 10 years ago – Vala was born.

So instead of using a proven and mature foundation, a new layer of indirection was created to essentially provide the same feature set. Commonly this is referred to as the “not invented here” syndrome. A more derogatory phrase would be reinventing the wheel.

What is sad here is that, being an open source project, GNOME disregards the biggest advantage of open source software, namely standing on the shoulders of giants. With open source software you can take an existing solution and improve upon it. This way you get the base functionality as well as the bug fixes that went into it for free. If you develop it from scratch, you most likely will have to fix the same bugs again yourself.

To sum up, here is what GNOME is losing right now:

  • 30 years of language and library experience by using Vala instead of C++
  • 5 years of deployment and bug fixing by using systemd instead of extending upstart
  • 1 year of development, testing and design if they reimplement overlay scrollbars
  • 8 years of foundation development that went into Eclipse, by developing Gnome Builder from scratch
  • but most importantly: the synergy effects by collaborating with others

Do not get me wrong, I am not saying that the GNOME solutions could be replaced by existing solutions – I am saying that by extending existing solutions the GNOME project and the free software landscape would be better off as a whole.

Tablet PCs – a chance for Maemo?

I have just watched the Apple iPad announcement and I have to say that I am quite impressed by the Apple marketing team. Before the film I could not think of a good use case for an oversized iPod, but after the film I have to say that Apple greatly refined the use case of the netbook as a second PC.

Instead of putting an ordinary OS into a differently shaped device, like Microsoft is seemingly doing with the Slate, Apple adjusted the OS to the new use case.

If you have a much smaller screen and a much smaller keyboard, like you have on netbooks, you don't want to write long articles or aim for the tiny buttons of ordinary user interfaces. Instead one should think of a netbook as a playback device, which only requires rudimentary interaction.

As Apple is great at streamlining stuff, they simply left out the keyboard and used a modified version of the iPhone OS, which is optimized for easy usage – and voilà, here comes the computer you actually want to use in your living room to quickly peek at Facebook or your mail inbox.

But there are two big disadvantages that come with using the iPhone OS. First, it is stripped down too much; there is no multitasking and no system clipboard, which takes away a lot of the convenience you have when using a real OS.

And second, you are again locked in by Apple. If you use the iPad, you are more or less forced to use iTunes for your music, the iBook Store for your eBooks and the App Store if you want new software.

Of course you might be able to jailbreak the device and use third-party software, but this will be nowhere near as convenient as using the defaults. This is Apple's Achilles' heel and where Maemo can triumph.

With Maemo you basically have a full-fledged Linux with an easy-to-use UI. You have multitasking, you have a system clipboard and, most importantly, you have an open software repository – all of this very well integrated into the UI.

You can freely choose your email provider, your music player and even the format you save your music in. And even though Nokia does not support OGG by default, the open nature of the OS allows it to be just as integrated as everything else.

Actually, Nokia only has to build an Internet Tablet the size of the iPad…

4 Years Later

Nearly 4 years ago Jon Smirl, the author of the experimental Xegl server, wrote a nice summary about the state of Linux graphics. He did that after realizing that fixing Linux graphics would take much more time than he could put into Xegl, so it is a realistic summary.

The interesting thing is that in his vision he described an X server which would run without root privileges and use OpenGL for all its drawing. Number one on his todo list though was a memory manager for DRM, which would enable all the nice stuff.

Well, it seems that with Karmic we will finally see that first step done; most users (Intel and AMD) should get a nice in-kernel memory manager, which will become visible through Kernel Mode Setting. So what can we expect next?

This diagram shows the dependencies of the various features. (Source) Green is what we should get with Karmic. The graphics memory manager in the kernel allows moving mode setting there too, so the resolution is set only once during bootup (flicker-free). Since the graphics card is now controlled in one place, it also allows dynamic power management and no longer requires root privileges for X, which results in better security. Memory management also allows giving 3D applications their own private front buffer, so they don't conflict when rendering (RDR, Redirected Direct Rendering) – which is basically the core of DRI2. It also allows supporting memory-related OpenGL features like VBOs and FBOs (Vertex/Frame Buffer Objects), which brings the compliance level of Mesa up to OpenGL 1.5.

Theoretically everything up to OpenGL 3 could be implemented now, but because of the old design of Mesa everything would have to be implemented for each driver over and over again. That is why Gallium3D was created. It is an intermediate-level API which abstracts from the hardware by offering just a state tracking interface. On top of that it is possible to implement more sophisticated APIs like OpenGL, OpenCL or a video decoding API like VDPAU. These implementations can then be shared across all drivers which are built on the Gallium3D architecture.

Well, there is still a way to go until we get those grey boxes, which Windows and Mac basically already had 4 years ago…

Gnome Online Desktop

There was a lot of talk recently about how Gnome should embrace online services in order to keep up with the Web 2.0 development. But sadly most of the ideas were like “let's integrate better with web service <foo>” – the problem is that I do not want to start using Google Calendar from now on; I like the way Lightning handles my appointments and I like that I have them available offline. What I would like is to be able to automatically synchronise them when an online connection is available.

But we are already too focused on the data layer right now, so let's take a step back and see where we are.

We are here

What we currently have are web pages, or rather web services, with interfaces described in XHTML and styled with CSS, and we have local applications with interfaces described in XML and (eventually) styled with CSS. So the UI is already done pretty similarly – although it still looks quite different, since there are standard web-page widgets and since most web pages are drawn inside the browser, while local applications are drawn as separate windows on the desktop.

The difference

One might think that the difference is the place where computation happens, but actually the computation happens in both cases on your local machine – it is just that JavaScript is the only language you can use for web services, and it is pretty limited right now. But things like TraceMonkey and Google Gears are creating a platform for computation-intensive applications delivered over HTTP.

And that is also the main difference: the way applications are deployed. Because web services can be updated every time you reload the page, you can easily keep your customers up to date, while for local applications you often have to deal with out-of-date installations and updates. This is especially a problem on non-Linux platforms, where you do not have a central package (application) manager. But on Linux we have advanced systems for update delivery, so this is not really a problem.

The real difference

What everyone is doing right now is rewriting existing and working code in JavaScript in order to solve that one delivery problem, which is really not that big on Linux, while the rewriting task is quite huge. And everyone starts using those new and shiny web services – but the question is why. The real benefit which web services offer over current local applications is centralized and therefore synchronised memory; you can log into Google Calendar from every PC you have access to and always see your appointments up to date.

So we need

So basically what is missing from Gnome is that centralized memory. The problem is that an open source community cannot easily offer that, because someone has to pay a lot of money for a lot of servers. So we still have to integrate with the existing web services. Nothing new here. But that is not a problem either, since there are already web services for everything one might imagine.

What we have to consider here is that we have to be able to switch easily between different services, like let's say Picasa Web Albums and Flickr. We could even define an open API for web service interaction which the services could then implement.

The master plan

To me it makes no sense to re-implement everything in JavaScript just for the sake of being able to run it inside the browser. We already have a variety of better languages available when running on the local machine, and much of the infrastructure is ready as well.

So instead we should make the existing applications more web-aware. Luckily there is already a framework made exactly for this: Telepathy. Currently Telepathy only manages your web-presence status and allows local applications to work with it. But it should also be able to manage your web pictures and web calendar and make them accessible to local applications in a similar way.

Then F-Spot could easily show you which of your pictures are currently published and which are only available locally, without making a difference between Picasa Web Albums and Flickr – you should only care whether your pictures are synchronised or not.

The next step would be to actually take advantage of such a framework. If you currently want to write a text, you have to care whether you write it for offline or online usage. If you write it for offline use, things are pretty easy: since Word is not available on Ubuntu, you use OpenOffice. But if you write it for online use, you have to deal with a bunch of different interfaces – depending on what your web software is, you have to deal with the WordPress text editor, the Joomla text editor and so on. And they all have in common that you can't save your text to your local computer easily.

The initial task was just to write a text – why do I have to care about such things at all? Would it not be great if I could just open a locally running text editor, write my stuff and then decide whether I want to save it to a file or publish it to a web service of my choice?

A locally running web-aware application could do that, while offering a consistent interface for text editing and doing a lot of things which you cannot do inside a browser – like launching GIMP to edit that picture you need in your article and then automatically uploading it.

So basically we have to move things offline in order to get an online desktop – and then we can also use browsers for what they were meant for: reading text.
