Incorporating 3D Gaussian Splats into the graphics pipeline

3D Gaussian splatting is the emerging rendering technique that is overtaking NeRFs. Since it is centered around point primitives, it is more compatible with traditional graphics pipelines that already support point rendering.

Gaussian splats essentially enhance the concept of point rendering by converting the point primitive into a 3D ellipsoid, which is then projected into 2D during the rendering process. This concept was initially described in 2002 [3], but the technique of extending Structure from Motion scans in this way was only detailed more recently [1].

In this post, I explore how to integrate Gaussian splats into the traditional graphics pipeline. This allows them to be used alongside triangle-based primitives and interact with them through the depth buffer for occlusion (see header image). This approach also simplifies deployment by eliminating the need for CUDA.

Storage

The original implementation uses .ply files as their checkpoint format, focusing on maintaining training-relevant data structures at the expense of storage efficiency, leading to increased file sizes.

For example, it stores the covariance as a scaling vector and a rotation quaternion, necessitating reconstruction during rendering. A more efficient approach is to exploit the symmetry of the covariance matrix and store only its diagonal and upper triangle, thereby eliminating the reconstruction and reducing storage requirements.

Further analysis of the storage usage for each attribute shows that the spherical harmonics of orders 1-3 are the main contributors to the file size. However, according to the ablation study in the original publication [1], these harmonics only lead to a modest PSNR improvement of 0.5.

Therefore, the most straightforward way to decrease storage is by discarding the higher-order spherical harmonics. Additionally, the level 0 spherical harmonics can be converted into a diffuse color and merged with opacity to form a single RGBA value. These simple yet effective methods were implemented in one of the early WebGL implementations, resulting in the .splat format. As an added benefit, this format can be easily interpreted by viewers unaware of Gaussian splats as a simple colored point cloud:

Results using a non Gaussian-splat aware renderer

By directly storing the covariance as described above, we can reduce the precision from float32 to float16, thereby halving the storage needed for that data. Furthermore, since most splats have limited spatial extents, we can also utilize float16 for position data, yielding additional storage savings.

With these changes, we achieve a storage requirement of 22 bytes per splat, in contrast to the 44 bytes needed by the .splat format and 236 bytes in the original implementation. Thus, we have attained a 10x reduction in storage compared to the original implementation simply by using more suitable data types.
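For illustration, here is a minimal numpy sketch of such a 22-byte record and of baking the scaling/rotation pair into the six unique covariance entries. The field names, field order and the (w, x, y, z) quaternion convention are my own assumptions, not a fixed file format:

import numpy as np

# Hypothetical 22-byte layout as described above:
# 3 x float16 position, 6 x float16 covariance (upper triangle), 4 x uint8 RGBA.
splat_dtype = np.dtype([
    ("position",   np.float16, 3),  # x, y, z
    ("covariance", np.float16, 6),  # xx, xy, xz, yy, yz, zz
    ("color",      np.uint8,   4),  # diffuse RGB from SH band 0 + opacity
])
assert splat_dtype.itemsize == 22

def covariance_upper(scale, quat):
    """Bake scaling + rotation quaternion (w, x, y, z) into the 6 unique
    covariance entries, so the renderer does not have to reconstruct them."""
    w, x, y, z = quat
    R = np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])
    cov = R @ np.diag(np.square(scale)) @ R.T  # Sigma = R S S^T R^T
    return cov[np.triu_indices(3)]             # xx, xy, xz, yy, yz, zz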

Blending

The image formation model presented in the original paper [1] is similar to NeRF rendering, to which it is also compared. It involves casting a ray and accumulating its intersections with the splats, which leads to front-to-back blending. This is precisely the approach taken by the provided CUDA implementation.

Blending remains a component of the fixed-function unit within the graphics pipeline, which can be set up for front-to-back blending [2] by using the factors (one_minus_dest_alpha, one) and by multiplying color and alpha in the shader as color.rgb * color.a. This results in the following equation:

\begin{aligned}C_{dst} &= (1 - \alpha_{dst}) \cdot \alpha_{src} C_{src} &+ C_{dst}\\ \alpha_{dst} &= (1 - \alpha_{dst})\cdot\alpha_{src} &+ \alpha_{dst}\end{aligned}

However, this method requires the framebuffer alpha value to be zero before rendering the splats, which is not typically the case as any previous render pass could have written an arbitrary alpha value.

A simple solution is to switch to back-to-front sorting and use the standard alpha blending factors (src_alpha, one_minus_src_alpha) for the following blending equation:

C_{dst} = \alpha_{src} \cdot C_{src} + (1 - \alpha_{src}) \cdot C_{dst}
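For concreteness, here is a sketch of the corresponding blend-state setup for both variants, assuming an OpenGL renderer driven through PyOpenGL (the original implementation uses CUDA instead):

from OpenGL.GL import (GL_BLEND, GL_FALSE, GL_ONE, GL_ONE_MINUS_DST_ALPHA,
                       GL_ONE_MINUS_SRC_ALPHA, GL_SRC_ALPHA,
                       glBlendFunc, glDepthMask, glEnable)

glEnable(GL_BLEND)

# Front-to-back variant: requires the framebuffer alpha to be cleared to zero
# and the shader to output color.rgb * color.a.
# glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_ONE)

# Back-to-front variant: standard alpha blending, no special framebuffer setup.
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)

# Test against the depth buffer for occlusion by opaque geometry,
# but do not write depth while drawing the sorted splats.
glDepthMask(GL_FALSE)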

This allows us to regard Gaussian splats as a special type of particles that can be rendered together with other transparent elements within a scene.

References

  1. Kerbl, Bernhard, et al. “3d gaussian splatting for real-time radiance field rendering.” ACM Transactions on Graphics 42.4 (2023): 1-14.
  2. Green, Simon. “Volumetric particle shadows.” NVIDIA Developer Zone (2008).
  3. Zwicker, Matthias, et al. “EWA splatting.” IEEE Transactions on Visualization and Computer Graphics 8.3 (2002): 223-238.

Steam Deck SSD Upgrade

If you, like me, went with the entry-level Steam Deck option with only 64 GB of internal storage, you likely realized quite soon that some games won't fit on it.

One option is to use the microSD expansion card slot. For current-gen games, its throughput of only about 150 MB/s does not seem to degrade loading performance compared to an NVMe SSD.
However, given that the internal storage is upgradable, the only logical choice for keeping your PC master race status is to cram the fastest NVMe SSD into that thing.

Specifically, you will need a single-sided SSD in the M.2 2230 form factor so it fits the space inside the Steam Deck.
I went with the KIOXIA Client-SSD BG5 512GB. Kioxia is the Toshiba spin-off for SSD drives, in case you are wondering about the brand. Although it is a PCIe 4.0 drive, its peak read throughput of 3.5 GB/s is within the practical limits of the PCIe 3.0 interface of the Steam Deck.
Also, its active power consumption of 4.1 W is quite close to the 3.8 W drawn by the custom PHISON PS5013-E13 SSD that Valve uses.

You can follow the iFixit Guide for the steps to actually swap the SSD. Make sure to transfer the ESD shielding wrap to the new SSD.

To get Steam OS on the new drive, follow the official recovery instructions and select the “Re-image Steam Deck” script.
This will install Steam OS on the blank SSD – similar to how you would install Ubuntu from a live USB.

Benchmarking results

Next, I wanted to actually compare the speed of the upgraded NVMe SSD with that of the stock eMMC storage. To this end I used KDiskMark – an open-source alternative to CrystalDiskMark that runs natively on Linux.

The tests were performed on SteamOS 3.3.1 using KDiskMark 2.3.0.


In short, the NVMe offers roughly an order of magnitude higher throughput than the eMMC.
Whether you feel this in-game highly depends on the given game. For older titles, even the eMMC is so fast that you cannot read the hints on the loading screen. However, for something like Flight Simulator 2020, which shuffles huge assets around, it will surely be noticeable.

Finally, the peak read performance of 3.5 GB/s is not reached. This might be due to the PCIe 3.0 bottleneck – I did not bother putting the drive into a PCIe 4.0 device. Still, there is a significant advantage in write performance over the older Kioxia BG4 series, which only manages 1.4 GB/s.

Drifting with WLtoys 284131 Mini RC-Car

In this post I will discuss how to convert the WLToys 284131 (new K989) into a drift-car.


Overview

This car is the latest iteration of the K989 (rally car) platform, of which there is also a variant specifically for drifting, namely the K969 Porsche.
However, you should still go with the more recent 284131 as it comes with an upgraded radio that has no dead-zone compared to the previous one. This will give you better control of the car.
Additionally, the 284131 now has metal ball-heads on the shocks and on the servo horn, which allow those parts to move more smoothly.
It also comes with a preinstalled light-kit, which helps with judging the orientation of the car from far away.
Furthermore, some adjustments were made compared to the K989 to cope with the heat generated by the motor: the transmitter module was rotated by 90° to move it away from the motor, and the motor pinion is now all brass, making it more heat resistant.

The only downside is really the ugly hoonitruck chassis, but at least this will be authentic after we do the drift-conversion.

The included battery lasts for about 20 min and can be fully charged in about 25 min if your USB charger can deliver 2.5 W. If you use a USB port older than 3.0, charging will take much longer.
Note that even if you get a kit with multiple batteries, you should take a break of about 10 min between runs to allow the motor to cool down. Otherwise it will fail much sooner.

Drift conversion

Out of the box, the 284131 is tuned for fast acceleration and handling at high speed:

  • The differentials are so stiff that you can consider them locked. This gives you the best acceleration.
  • The turning radius is limited, which prevents flipping over at high speed.
  • The stiff shocks reduce body-lean, additionally lowering the risk of flipping.
  • Traction is mainly achieved by the grippy rubber tires.

For drifting however, we generally run at lower speed and need precise handling there. This basically means undoing all of the choices listed above.

Drifting with the changes suggested in this post

Some of the changes are easy to do, others are more involved:

  • Replace the rubber tires with some hard-plastic ones. We must get rid of some grip to be able to slide sideways. I suggest just going with the K969 tires, which only cost about 5€.
  • Remove the spacers from both the front and back suspension to make it soft. This will increase forward grip while drifting.
  • Use the upper hole on the servo-horn to get a tighter turning radius. Unfortunately, the stock ball-head does not fit in the upper hole and you cannot get the old servo-horn any more. I suggest using the “MINI-Q 3.5mm ball-head” instead. It will set you back about 3€.
  • Most crucially, we need a locked differential in the back and an open differential in the front. The locked differential will cause the back to lose traction and drift. In contrast, the open differential will keep traction and allow us to control the drift.
    This is a difference to the K969, where both differentials are locked and the car merely slides (like on ice) instead of drifting.

The good news is that the stock diffs are so stiff that you can just keep them in the back and they will behave as if they were locked.

Making differentials work

The bad news is that getting actually working (i.e. open) differentials is not that easy. The cheapest option is to disassemble the stock one and loosen it up. For this, I recommend using a (3 mm) drill to widen the diff housing. If you try to use sandpaper on the diff arms, you will probably not get them uniform enough to run smoothly.


One thing to watch out for when adjusting the diff is that both diff arms have the same resistance. You can test this by hand by holding down the center and rotating each arm. Also, when assembled, the car should accelerate in a straight line.

If you don't want to go through the hassle, you can also just buy the Mini-Z MD005 diff (15€) and a pair of extended 11 mm swing-shafts (10€) to compensate for the shorter diff arms.

If you want the best diff possible, you can go for the Mini-Z MDW018 ball diff or the MDW017 one-way diff. Especially the latter gives you even better control for drifting. However, each of those costs as much as the whole 284131 RTR kit.

Lipo tester for storing the batteries

When ordering stuff anyway, make sure to also get a LiPo tester. Those cost about 2€ and allow monitoring the charge of the battery. This is useful when you want to take a break for a few days: in this case the battery should be at 3.8 V per cell, otherwise you risk permanently damaging it. To get there, you can keep the tester connected to the white plug while driving and set the beeper to that voltage. If the beeper is too loud, you can dampen it by putting some cotton wool into the housing.

Bad upgrades

There are also some bad upgrades you can buy. Those are either unnecessary or actually worse than the stock parts. Particularly, this concerns the metal replacement parts. Metal parts are harder to manufacture at high precision, so you might actually degrade the performance by installing them. Also, they make the car heavier and thus decrease acceleration.

Generally, I would say that you do not need any of them for drifting. However, if you do touring and any of the plastic parts break, you might consider replacing those with a metal equivalent.

All metal ball differentials

Stock diff, good pinion – Metal diff, eaten pinion

You can get an all-metal ball diff on AliExpress for about 8€. After some run-in, those work very well and are smoother than what you get by fixing the stock ones.
Unfortunately, those all-metal cogs (which are also shorter than stock) will eat up the plastic center-shaft pinion in no time, as we have high traction on the front wheels when drifting.

Computing replaygain for your Music library

TL;DR: command at the end of the post

If you want equal loudness across your music library, the go-to solution and de-facto standard is ReplayGain.
If you are using a music streaming service, the provider typically takes care of that for you – but maybe you want to migrate towards your own streaming solution.

ReplayGain analyses your audio files and stores their deviation from the baseline loudness as a tag. A compatible audio player can then read the tag and correct the playback volume so all your tracks have the same loudness.
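For intuition, this is all a player has to do with the stored value (a toy illustration, not taken from any particular player):

def replaygain_scale(gain_db: float) -> float:
    """Convert a ReplayGain correction in dB into a linear volume factor."""
    return 10 ** (gain_db / 20)

print(replaygain_scale(-6.5))  # a track 6.5 dB too loud is scaled to ~0.47x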

Of course things get messy once you look at details like what the baseline loudness should be and how to determine loudness in the first place. Therefore we set the baseline once and for all to 89 dB and consider even tracks of the same album individually. If you disagree, feel free to branch off and read up on the details now.

The next issue is that ReplayGain was born in a time when mp3 was synonymous with digital music, hence the algorithm was first implemented as the mp3gain CLI tool. Nowadays you also need aacgain and vorbisgain to cover all your formats, which is cumbersome to automate.

The larger issue with ReplayGain is that it defines the loudness of a track by its peak volume. While a sane choice in theory, in practice the music and advertising industries raced to increase the perceived loudness without raising the peak volume. As broadcasters also used peak-volume normalization, one could blow your eardrums with that very special advertisement.
Therefore EBU R 128 was proposed, which at its core is RMS-based, meaning it considers the average volume of the track.

Remember that ReplayGain merely adds a correction value to the tracks? This allows us to compute that correction value based on the R128 algorithm for a better normalization, which is exactly what the r128gain tool (https://github.com/desbma/r128gain) does.
Being written in the modern day, r128gain also processes all possible audio formats by hooking into ffmpeg as a filter.

So, without further ado, this is the command to normalize your Music library:

# pip3 install r128gain
r128gain -p -r Music/

This will preserve ("-p") the file timestamps and recursively ("-r") process all files in the given directory.

Troubleshooting

Note that if you previously used mp3gain, your files might contain non-standard lower-case replaygain_* tags, while r128gain will only write REPLAYGAIN_* tags.
To avoid confusing players with different values, you should remove the non-standard tags. This can be automated with eyeD3:

eyeD3 -Q --remove-frame RGAD --preserve-file-times --user-text-frame=replaygain_track_gain: --user-text-frame=replaygain_track_peak: --user-text-frame=replaygain_album_gain: --user-text-frame=replaygain_album_peak: Music/

Refer to its documentation for the meaning of the parameters. For RGAD see here.
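If you want to check which files are actually affected before running the removal, a small sketch using the mutagen library (my choice here, not something the tools above require) could look like this:

import sys
from pathlib import Path

from mutagen.id3 import ID3, ID3NoHeaderError

# List MP3s that still carry lowercase replaygain_* TXXX frames (e.g. from mp3gain),
# which would coexist with the REPLAYGAIN_* tags written by r128gain.
for path in Path(sys.argv[1]).rglob("*.mp3"):
    try:
        tags = ID3(str(path))
    except ID3NoHeaderError:
        continue
    stale = [f.desc for f in tags.getall("TXXX") if f.desc.startswith("replaygain_")]
    if stale:
        print(path, stale)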

Header Image: “volume” by christina rutz (CC-BY-2.0)

Breaking free of Google

This post is for those of you who care about privacy – i.e. if you want information about you to be exclusively under your control.
In that context, not only Google is to blame, but actually most of the cloud services we know and use today.

Still, Google will serve us as a nice placeholder, as it is the market leader when it comes to providing free services in exchange for your user profile, which Google in turn uses to sell targeted advertising. Even if you are fine with that, Google is also infamous for killing services – which might eventually hit the one you rely on.

As the world is moving mobile-first, a prerequisite for replacing a service is that we can easily integrate the replacement with an Android device.
Some might wonder why I chose Android here, given that it is made by Google. See, the problem is not who makes a service/ device, but who controls it. And with Android the main leverage for Google is bundling its services. If you take them away, the device itself is fully under your control – in contrast to Apple/ iOS.


Now that we have set the stage, let's start with:

Nextcloud

At the heart of our efforts will be Nextcloud. It started as an open-source alternative to Dropbox/ Google Drive, but has nowadays grown into a platform for a plethora of different services.
The main selling point is that you can just install Nextcloud on your own machine – ensuring that your data stays private. The software being open-source also means that it is not under the control of a single corporation – in fact, Nextcloud was forked from its predecessor Owncloud after a corporation tried to put the screws on its users.

Note that if you are serious about this, you will need to invest around 500€ to get a machine for hosting that is decently fail-safe. If you would rather go cheap, you can also just use a RaspberryPi and get away with less than half of that amount. For inspiration, you can take a look at the build I use or at the list of commercial Nextcloud device providers.

Files & Photos

By using the Nextcloud Android App, you can directly replace Google Drive/ Dropbox as this is the core functionality of Nextcloud.

Additionally, the App allows you to automatically back up your Photos/ Videos and free the local storage, so you can stop using Google Photos too.

Contacts & Calendar

Nextcloud also supports Contacts and Calendar out of the box. To integrate them into your Android device there is the DAVx5 app. This app functions as an additional data provider, so you can just continue using the stock Google Contacts and Calendar apps. Those will, however, stop sending any data to Google.

This is especially useful, if you run a small-business and must ensure that your customer data is private according to the CCPA/ GDPR.

Nextcloud News

An often overlooked part of your privacy is Google News (also part of Google Now). Each time you view an article there, Google can mine your interests and political views – similar to what Facebook does. And by now you should know where these things can lead.

Another drawback here is that the Google Algorithm will create a bubble for you by only showing content coherent with your current world-view.
I still prefer to manually choose the news sources – to create that bubble myself.

To do that, one usually subscribes to some web feeds using a feed aggregator like the Feedly service – similar to what Google Reader used to offer before being killed by Google.

The go-to app here is Nextcloud News combined with the Android App for a pleasant mobile usage.

Music

Most of you probably just stream music via Spotify/ Youtube Music, but keep in mind that these services merely rent the songs, and as such they can arbitrarily disappear from your library.

Therefore, I like to have my own copy of the songs. Unfortunately, it is very inconvenient to juggle mp3s around to get the music onto various devices.

Google Play Music used to offer the best of both worlds for me: you could upload your own music files and manage your playlists in one place. Additionally, you could make the music available offline by pinning individual playlists on your device.
Unfortunately, that concept did not allow Google to arbitrarily inject ads into my music stream, and therefore the service got killed as well.

Nextcloud Music to the rescue! This app picks up your music library via Nextcloud Files and allows you to stream it via the browser or the Subsonic API.
This is where the DSub Android player takes over. As with Play Music, you can either stream the library or pin individual songs/ playlists for offline use.

Other

If you clicked on the links above, you probably noticed the F-Droid alternative app store for Android.
Getting your apps there ensures that you are using verified packages and open-source software. You can easily use it alongside the Play Store. If that is too inconvenient for you, all of the above apps are also available via the Play Store.

Finally, there is the web browser. If you do not log in with your Google account, using Chrome is mostly safe. However, I suggest switching to Firefox. See my original article on that topic for details. In short, the main reason is the availability of extensions. Those allow you to block ads on the mobile web too and to use Youtube in the background.

Header Image: Digital Chains by stanjourdan (CC-BY-SA-2.0)

Mould King 13106 Forklift Review

I got myself the MouldKing 13106 Forklift, which is based on the MOC 3681 by KevinMoo and wanted to share my impressions with you.

First of all, Mould King actually improved the set by exclusively using black Technic pins instead of the blue ones like in the MOC. Also, they are officially cooperating with the MOC designer – so he is likely getting some share of the sales.

The set comes with the “New PowerModule 4.0”, which means it supports proportional output. If you use the new joystick controller (like I do in the video) or the app, you can have smooth control of the motors and not just binary 0% or 100% throttle as with the standard remote.

As you can see, I actually put on some of the stickers. Some purists never do anything like this, because they argue that after some time the stickers start peeling off and look used. This is certainly a good point if you are building a sports car – with a forklift, however, I would argue that worn stickers add to the looks.

Compared to the original MOC, Mould King removed the lights, but added a pallet similar to the one found in the Lego 42079 Forklift.

Interested in getting the set? Support this Site by using the following affiliate Link:

Manual errata & comments

Generally, I prefer the Mould King manual to the original by Kevin Moo as I like renderings more than photographs. However, it's nice to have the original at hand if something looks fishy. While building, I noticed the following:

Step 34: Cable management is almost completely skipped in the manual. I laid all cables through the opening behind the threads. This keeps them out of the way later. Then feed through the cables of the motors that you add at steps 52 & 55.

Step 100: The battery-box position is wrong. It will collide with the bar you added at step 96. To make it fit, just rotate the battery-box by 180°.

Also, the direction of motor A has to be reversed. Press and hold left-shoulder, up and down for 3 seconds for this.

Step 111: The arms that you added in steps 89/ 90 should be oriented upwards to hold the footstep.

Step 143: Use a black bush instead of the 2-pin-axle beam, so things look symmetrical. This is a leftover from the original MOC, which squashed the IR receiver in there.

Step 156: Attach the levers to the front console at step 173 instead of attaching them to the seat here. After all they are supposed to control the fork and not the backrest.

Step 214: I suggest using gray 2-axles at step 230 instead of the suggested white ones. This way the front-facing axles will all be gray. To compensate, just use white 2-axles here – those won't be visible at all anyway.

Step 277: When adding the fork to the lift-arm, make sure that it has as much play as possible. Otherwise the fork will get stuck when moved all the way up.

Interior with fixes at step 143 & step 156

Step 278: Do not fix the threads yet. Wait until the end so you can correctly measure the lowest position of the fork (which gives you the length of the threads).

Step 286: Make sure that the 3-pin pops out towards the 8-axle. This will make joining things at step 288 much easier.

Comparing water filters

Let's say you want to reduce the water's carbonate hardness because you got a shiny coffee machine and descaling it is a time-consuming mess.

If you don't happen to run a coffee shop, using a water jug is totally sufficient for this. Unfortunately, while the jug itself is quite cheap, the filters you need will cost you an arm and a leg – similar to how the printer-ink business works.

The setup

Here, we want to look at the different filter options and compare their performance. The contenders are

Name                Pricing
Brita Classic       ~15.19 €
PearlCo Classic     12.90 €
PearlCo Protect+    15.90 €

As said initially, the primary goal of using these filters is to reduce the carbonate hardness – any other claimed changes, like the pH mythology, will not be considered.

To measure the performance in this regard, I am using a digital total dissolved solids (TDS) meter – just like the one used in the Wikipedia article. To make the measurement robust against environmental variations, I am not only measuring the PPM in the filtered water, but also in the tap water before filtering. The main indicator is then the reduction factor.
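Spelled out, the indicator is simply the relative drop between the two readings (the values below are purely illustrative):

def ppm_reduction(tap_ppm: float, filtered_ppm: float) -> float:
    """Reduction factor of a tap/filtered measurement pair, in percent."""
    return (1 - filtered_ppm / tap_ppm) * 100

print(ppm_reduction(299, 206))  # ~31% reduction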

Also, you are not using the filter only once, so I repeated the measurements over the course of 37 days. Why 37? Well, most filters are specified for 30 days of usage – but I wanted to see how much cushion we have there.

So – without further ado – the results:

Results

Name                Ø PPM reduction    Ø absolute PPM
Brita Classic       31%                206
PearlCo Classic     24%                218
PearlCo Protect+    32%                191

As motivated above, the difference in absolute PPM can be explained by environmental variation – after all the measurements took place over the course of more than 3 months.

However, we see that the pricing difference is indeed reflected by filtering performance. By paying ~20% more, you get a ~30% higher PPM reduction.

The only thing missing is the time series to see beyond 30 days:

As you can see, the filtering performance is continuously declining after a peak at about 10-15 days of use.

And for completeness, the absolute PPM values:

How to generate random points on a sphere

This question often pops up when you need a random direction vector to place things in 3D or you want to do a particle simulation.

We recall that a 3D unit sphere (and hence a direction) is parametrized by only two variables: the elevation \theta \in [0; \pi] and the azimuth \varphi \in [0; 2\,\pi], which can be converted to Cartesian coordinates as

\begin{aligned} x &= \sin\theta \, \cos\varphi \\ y &= \sin\theta \, \sin\varphi \\ z &= \cos\theta \end{aligned}

If one takes the easy way and uniformly samples this parametrization in numpy like

import numpy as np

phi = np.random.rand() * 2 * np.pi
theta = np.random.rand() * np.pi

one (i.e. you, as you are reading this) ends up with something like this:

While the 2D surface of the polar coordinates is uniformly sampled (left), we observe a bias of the sampling density towards the poles when projecting to Cartesian coordinates (right).
The issue is that the cosine mapping of the elevation has a non-uniform step size in Cartesian space, as you can easily verify: \cos'(x) = -\sin(x).

The solution is to simply sample the elevation in the Cartesian space instead of the spherical space – i.e. sampling z \in [-1; 1]. From that we can get back to our elevation as \theta = \arccos z:

z = 1 - np.random.rand() * 2 # convert rand() range 0..1 to -1..1
theta = np.arccos(z)

As desired, this compensates the spherical coordinates such that we end up with uniform sampling in the Cartesian space:

Custom opening angle

If you want to further restrict the opening angle instead of sampling the full sphere, you can easily extend the above. Here, you must re-map the cos values from [1; -1] to [0; 2] as

cart_range = -np.cos(angle) + 1 # maximal range in cartesian coords
z = 1 - np.random.rand() * cart_range
theta = np.arccos(z)

Optimized computation

If you do not actually need the parameters \theta, \varphi, you can spare some trigonometric functions by using \sin \theta = \sqrt { 1 - z^2} as

\begin{aligned} x &= \sqrt { 1 - z^2} \, \cos\varphi \\ y &= \sqrt { 1 - z^2} \, \sin\varphi \end{aligned}
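Putting all of the above together (this is just my own wrapping of the formulas from this post), a complete sampler could look like this:

import numpy as np

def sample_sphere(n, angle=np.pi, rng=None):
    """Uniformly sample n unit vectors within an opening angle around +z
    (angle=pi covers the full sphere)."""
    rng = np.random.default_rng() if rng is None else rng
    phi = rng.random(n) * 2 * np.pi
    z = 1 - rng.random(n) * (1 - np.cos(angle))  # uniform in [cos(angle), 1]
    sin_theta = np.sqrt(1 - z**2)                # sin(theta) without arccos/cos
    return np.stack([sin_theta * np.cos(phi),
                     sin_theta * np.sin(phi),
                     z], axis=-1)

points = sample_sphere(1000)  # 1000 uniformly distributed directions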

Self-built NAS for Nextcloud hosting

With Google cutting its unlimited storage and ending the Play Music service, I decided to use my own Nextcloud more seriously.
In part because Google forced all its competitors out of the market, but mostly because I want to be independent of any cloudy services.

The main drawback of my existing Nextcloud setup, which I have written about here, was the missing redundancy; the nice thing about putting your stuff in the cloud is that you do not notice if one of the storage devices fails – Google will take care of providing you with a backup copy of your data.

Unfortunately, the Intel NUC based build I used, while offering great power efficiency, did not support adding a second HDD to create a fail-safe RAID1 setup. Therefore I had to upgrade.

As I still wanted to keep things power-efficient and in a small form factor, my choice fell on the ASRock DeskMini series. Here, I went with the AMD variant (A300) in order to avoid paying the toll of the Spectre mitigations with Intel (resulting in just 80% of baseline performance).

The photos above show you the size difference, which is considerable – yet necessary to cram two 2.5″ SATA drives next to each other.
Keep in mind that while the NUC devices have their CPU soldered on, with the A300 we are getting the standardized STX form factor, which means you can replace and upgrade the mainboard and the CPU as you wish, while with a NUC you are basically stuck with what you bought initially.

The full configuration of the build is as follows and totals about 270€:

  • ASRock DeskMini A300
  • AMD Athlon 3000G
  • 8GB Crucial DDR4-2666 RAM
  • WD Red SA500 NAS 500GB
  • Crucial MX500 500GB

Note that I deliberately chose SSDs from different vendors to reduce the risk of simultaneous failure.
Also, while the 3000G is not the fastest AMD CPU, it is sufficient to host Nextcloud and is still a nice upgrade from the Intel Celeron I used previously.
Furthermore, its 35 W TDP nicely fits the constrained cooling options. Note that you can limit Ryzen 3/5 CPUs to 35 W in the BIOS as well, so there is no need to get their GE variants.
However, for a private server you probably do not need that CPU power anyway, so just go with the Athlon 3000G for half the price.

Unfortunately, the A300 system is not designed for passive cooling and comes with a quite annoying CPU fan. To me, the fan that comes with the Athlon 3000G was less annoying, so I used that instead.
Either way, you should set the fan speed to 0% below 50°C in the BIOS, which results in 800 RPM and is inaudible while keeping the CPU reasonably cool.

Power Consumption

As the machine will run 24/7, power consumption is an important factor.

The 35 W TDP gives us an upper limit of what the system will consume under sustained load – however, the more interesting measure is the idle consumption, as that is the state the system will be in most of the time.

As I have already tried some builds with different architectures, we have some interesting values to compare against, putting the A300 build in perspective:

Build            CPU             Idle     Load
Odroid U3        Exynos 4412     3.7 W    9 W
Gigabyte BRIX    Intel N3350     4.5 W    9.6 W
A300             Athlon 3000G    6.8 W    33.6 W

While you can obviously push the system towards 35 W with multiple simultaneous users, the 7.3 W idle consumption is quite nice.
Keep in mind that the A300 was measured with two SATA drives operating as RAID1. If you only use one, you can subtract 1 W – at which point it is only 1.5 W away from the considerably weaker NUC system.

You might now wonder whether the load or the idle measure is closer to the typical consumption. For this I measured the consumption over 30 days, which totaled 5.23 kWh – or about 7.3 W on average (5.23 kWh / 720 h).

Currently, the average price for 1 kWh is 0.32€, so running the server costs about 1.67€/ Month. For comparison, Google One with 200 GB will set you back 2.99€/ Month.

Power optimizations

To reach that 7.3 W idle, you need to tune some settings though. The most important one – and luckily the easiest to fix – is using a recent kernel.
If you are on Ubuntu 18.04, update to 20.04 or install the HWE kernel (5.4.0) – it saves you 4 Watts (11.3 down to 7.3).

To save about 0.5 W, you can downgrade the network interface from 1 Gbit to 100 Mbit by executing:

ethtool -s enp2s0 speed 100 duplex full autoneg on

Additionally, you can use Intel's powertop to tune your system settings for power saving:

powertop --auto-tune

Lecture on Augmented Reality

Due to the current circumstances, I had to record the lectures on Augmented Reality, which I typically hold live. This was much more work than anticipated.
On the other hand, it means that I can make them available via Youtube now.

So, if you ever wanted to learn about the basic algorithms behind Augmented Reality, now is your chance.

The lecture is structured into two parts:

  • Camera Calibration, and
  • Object Pose Estimation.

You can click on the TOC links below to directly jump to a specific topic.

Camera Calibration

Object Pose Estimation
